Compare commits

main...before-harp

No commits in common. "main" and "before-harp" have entirely different histories.

636 changed files with 50425 additions and 305734 deletions

.github/workflows/ci.yml

@@ -1,65 +0,0 @@
name: CI
on:
  push:
    branches:
      - main
  pull_request:
jobs:
  coverage:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Setup Ruby
        uses: ruby/setup-ruby@v1
        with:
          ruby-version: .ruby-version
          bundler-cache: true
      - name: Bootstrap
        run: bin/bootstrap
      - name: Coverage
        run: bundle exec bake coverage
  lint:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Setup Ruby
        uses: ruby/setup-ruby@v1
        with:
          ruby-version: .ruby-version
          bundler-cache: true
      - name: Bootstrap
        run: bin/bootstrap
      - name: Lint
        run: bundle exec bake lint
  debug:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Setup Ruby
        uses: ruby/setup-ruby@v1
        with:
          ruby-version: .ruby-version
          bundler-cache: true
      - name: Bootstrap
        run: bin/bootstrap
      - name: Debug Build
        run: bundle exec bake debug

.gitignore

@@ -1,3 +1,11 @@
-www
-gemini
-Tests/*/actual
+.bundle
+_blog
+public/js/*.js
+public/css/*.css
+discussd/discuss.dirty
+public/blog
+public/proj
+node_modules
+public/s42/.htaccess
+public/images/blog
+public/f

.ruby-version

@@ -1 +0,0 @@
4.0.1

.tm_properties

@@ -0,0 +1 @@
exclude = "{$exclude,_blog,public/proj,public/blog,*.min.js}"

.zed/settings.json

@@ -1,3 +0,0 @@
{
"file_scan_exclusions": ["public/tweets/", "www", ".DS_Store", ".git"]
}

AGENTS.md

@@ -1,84 +0,0 @@
# Repository Guidelines
## Project Structure & Module Organization
This repository is a Ruby static-site generator (Pressa) that outputs both HTML and Gemini formats.
- Generator code: `lib/pressa/` (entrypoint: `lib/pressa.rb`)
- Build/deploy/draft tasks: `bake.rb`
- Tests: `test/`
- Site config: `site.toml`, `projects.toml`
- Published posts: `posts/YYYY/MM/*.md`
- Static and renderable public content: `public/`
- Draft posts: `public/drafts/`
- Generated HTML output: `www/` (safe to delete/regenerate)
- Generated Gemini output: `gemini/` (safe to delete/regenerate)
- Gemini protocol reference docs: `gemini-docs/`
- CI: `.github/workflows/ci.yml` (runs coverage, lint, and debug build)
Keep new code under the existing `Pressa` module structure (for example `lib/pressa/posts`, `lib/pressa/projects`, `lib/pressa/views`, `lib/pressa/config`, `lib/pressa/utils`) and add matching tests under `test/`.
## Setup, Build, Test, and Development Commands
- Use `rbenv exec` for Ruby commands in this repository (for example `rbenv exec bundle exec ...`) to ensure the project Ruby version is used.
- `bin/bootstrap`: install prerequisites and gems (uses `rbenv` when available).
- `rbenv exec bundle exec bake debug`: build HTML for `http://localhost:8000` into `www/`.
- `rbenv exec bundle exec bake serve`: serve `www/` via WEBrick on port 8000.
- `rbenv exec bundle exec bake watch target=debug`: Linux-only autorebuild loop (`inotifywait` required).
- `rbenv exec bundle exec bake mudge|beta|release`: build HTML with environment-specific base URLs.
- `rbenv exec bundle exec bake gemini`: build Gemini capsule into `gemini/`.
- `rbenv exec bundle exec bake publish_beta`: build and rsync `www/` to beta host.
- `rbenv exec bundle exec bake publish_gemini`: build and rsync `gemini/` to production host.
- `rbenv exec bundle exec bake publish`: build and rsync both HTML and Gemini to production.
- `rbenv exec bundle exec bake clean`: remove `www/` and `gemini/`.
- `rbenv exec bundle exec bake test`: run test suite.
- `rbenv exec bundle exec bake guard`: run Guard for continuous testing.
- `rbenv exec bundle exec bake lint`: lint code with StandardRB.
- `rbenv exec bundle exec bake lint_fix`: auto-fix lint issues.
- `rbenv exec bundle exec bake coverage`: run tests and report `lib/` line coverage.
- `rbenv exec bundle exec bake coverage_regression baseline=merge-base`: compare coverage to a baseline and fail on regression (override `baseline` as needed).
## Draft Workflow
- `rbenv exec bundle exec bake new_draft "Post Title"` creates `public/drafts/<slug>.md`.
- `rbenv exec bundle exec bake drafts` lists available drafts.
- `rbenv exec bundle exec bake publish_draft public/drafts/<slug>.md` moves draft to `posts/YYYY/MM/` and updates `Date` and `Timestamp`.
## Content and Metadata Requirements
Posts must include YAML front matter. Required keys (enforced by `Pressa::Posts::PostMetadata`) are:
- `Title`
- `Author`
- `Date`
- `Timestamp`
Optional keys include `Tags`, `Link`, `Scripts`, and `Styles`.
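A minimal post satisfying these requirements might begin like this (title, author, and dates are placeholder values; use whichever date formats `Pressa::Posts::PostMetadata` accepts):

```yaml
---
Title: Example Post
Author: Your Name
Date: 2025-01-15
Timestamp: 2025-01-15T12:00:00-08:00
Tags: [example]
---
```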
## Coding Style & Naming Conventions
- Ruby (see `.ruby-version`).
- Follow idiomatic Ruby style and keep code `bake lint`-clean.
- Use 2-space indentation and descriptive `snake_case` names for methods/variables, `UpperCamelCase` for classes/modules.
- Prefer small, focused classes for plugins, views, renderers, and config loaders.
- Do not hand-edit generated files in `www/` or `gemini/`.
## Testing Guidelines
- Use Minitest under `test/` (for example `test/posts`, `test/config`, `test/views`).
- Add regression tests for parser, rendering, feed, and generator behavior changes.
- Before submitting, run:
- `rbenv exec bundle exec bake test`
- `rbenv exec bundle exec bake coverage`
- `rbenv exec bundle exec bake lint`
- `rbenv exec bundle exec bake debug`
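When adding one of those regression tests, a self-contained sketch of the Minitest shape used under `test/` looks like this (the `slugify` helper is a stand-in defined for the example, not the actual Pressa implementation):

```ruby
require "minitest/autorun"

# Stand-in helper for illustration only; Pressa's real slug logic lives in lib/pressa.
def slugify(title)
  title.downcase.gsub(/[^a-z0-9]+/, "-").gsub(/\A-|-\z/, "")
end

class SlugifyTest < Minitest::Test
  def test_replaces_spaces_and_punctuation_with_hyphens
    assert_equal "post-title", slugify("Post Title!")
  end
end
```

Files like this run directly with `rbenv exec bundle exec ruby test/slugify_test.rb`, or get picked up by `bake test`.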
## Commit & Pull Request Guidelines
- Use concise, imperative commit subjects (history examples: `Fix internal permalink regression in archives`).
- Keep commits scoped to one concern (generator logic, content, or deployment changes).
- In PRs, include motivation, verification commands run, and deployment impact.
- Include screenshots when changing rendered layout/CSS output.
## Deployment & Security Notes
- Deployment is defined in `bake.rb` via rsync over SSH.
- Current publish host is `mudge` with:
- production HTML: `/var/www/samhuri.net/public`
- beta HTML: `/var/www/beta.samhuri.net/public`
- production Gemini: `/var/gemini/samhuri.net`
- `bake publish` deploys both HTML and Gemini to production.
- Validate `www/` and `gemini/` before publishing to avoid shipping stale assets.
- Never commit credentials, SSH keys, or other secrets.

Six binary image files changed; each is listed with identical Before/After dimensions and sizes (136 KiB, 29 KiB, 30 KiB, 2 KiB, 2.1 KiB, and 6.5 KiB).
Gemfile

@@ -1,15 +1,6 @@
-source "https://rubygems.org"
+source 'https://rubygems.org'
-gem "phlex", "~> 2.3"
-gem "kramdown", "~> 2.5"
-gem "kramdown-parser-gfm", "~> 1.1"
-gem "rouge", "~> 4.6"
-gem "dry-struct", "~> 1.8"
-gem "builder", "~> 3.3"
-gem "bake", "~> 0.20"
-group :development, :test do
-  gem "guard", "~> 2.18"
-  gem "minitest", "~> 6.0"
-  gem "standard", "~> 1.43"
-end
+gem 'builder'
+gem 'json'
+gem 'mustache'
+gem 'rdiscount'

Gemfile.lock

@@ -1,178 +1,16 @@
GEM
  remote: https://rubygems.org/
  specs:
    ast (2.4.3)
    bake (0.24.1)
      bigdecimal
      samovar (~> 2.1)
    bigdecimal (4.0.1)
    builder (3.3.0)
    coderay (1.1.3)
    concurrent-ruby (1.3.6)
    console (1.34.2)
      fiber-annotation
      fiber-local (~> 1.1)
      json
    dry-core (1.2.0)
      concurrent-ruby (~> 1.0)
      logger
      zeitwerk (~> 2.6)
    dry-inflector (1.3.1)
    dry-logic (1.6.0)
      bigdecimal
      concurrent-ruby (~> 1.0)
      dry-core (~> 1.1)
      zeitwerk (~> 2.6)
    dry-struct (1.8.0)
      dry-core (~> 1.1)
      dry-types (~> 1.8, >= 1.8.2)
      ice_nine (~> 0.11)
      zeitwerk (~> 2.6)
    dry-types (1.9.1)
      bigdecimal (>= 3.0)
      concurrent-ruby (~> 1.0)
      dry-core (~> 1.0)
      dry-inflector (~> 1.0)
      dry-logic (~> 1.4)
      zeitwerk (~> 2.6)
    ffi (1.17.3-aarch64-linux-gnu)
    ffi (1.17.3-aarch64-linux-musl)
    ffi (1.17.3-arm-linux-gnu)
    ffi (1.17.3-arm-linux-musl)
    ffi (1.17.3-arm64-darwin)
    ffi (1.17.3-x86-linux-gnu)
    ffi (1.17.3-x86-linux-musl)
    ffi (1.17.3-x86_64-darwin)
    ffi (1.17.3-x86_64-linux-gnu)
    ffi (1.17.3-x86_64-linux-musl)
    fiber-annotation (0.2.0)
    fiber-local (1.1.0)
      fiber-storage
    fiber-storage (1.0.1)
    formatador (1.2.3)
      reline
    guard (2.20.1)
      formatador (>= 0.2.4)
      listen (>= 2.7, < 4.0)
      logger (~> 1.6)
      lumberjack (>= 1.0.12, < 2.0)
      nenv (~> 0.1)
      notiffany (~> 0.0)
      pry (>= 0.13.0)
      shellany (~> 0.0)
      thor (>= 0.18.1)
    ice_nine (0.11.2)
    io-console (0.8.2)
    json (2.19.2)
    kramdown (2.5.2)
      rexml (>= 3.4.4)
    kramdown-parser-gfm (1.1.0)
      kramdown (~> 2.0)
    language_server-protocol (3.17.0.5)
    lint_roller (1.1.0)
    listen (3.10.0)
      logger
      rb-fsevent (~> 0.10, >= 0.10.3)
      rb-inotify (~> 0.9, >= 0.9.10)
    logger (1.7.0)
    lumberjack (1.4.2)
    mapping (1.1.3)
    method_source (1.1.0)
    minitest (6.0.1)
      prism (~> 1.5)
    nenv (0.3.0)
    notiffany (0.1.3)
      nenv (~> 0.1)
      shellany (~> 0.0)
    parallel (1.27.0)
    parser (3.3.10.1)
      ast (~> 2.4.1)
      racc
    phlex (2.4.1)
      refract (~> 1.0)
      zeitwerk (~> 2.7)
    prism (1.9.0)
    pry (0.16.0)
      coderay (~> 1.1)
      method_source (~> 1.0)
      reline (>= 0.6.0)
    racc (1.8.1)
    rainbow (3.1.1)
    rb-fsevent (0.11.2)
    rb-inotify (0.11.1)
      ffi (~> 1.0)
    refract (1.1.0)
      prism
      zeitwerk
    regexp_parser (2.11.3)
    reline (0.6.3)
      io-console (~> 0.5)
    rexml (3.4.4)
    rouge (4.7.0)
    rubocop (1.82.1)
      json (~> 2.3)
      language_server-protocol (~> 3.17.0.2)
      lint_roller (~> 1.1.0)
      parallel (~> 1.10)
      parser (>= 3.3.0.2)
      rainbow (>= 2.2.2, < 4.0)
      regexp_parser (>= 2.9.3, < 3.0)
      rubocop-ast (>= 1.48.0, < 2.0)
      ruby-progressbar (~> 1.7)
      unicode-display_width (>= 2.4.0, < 4.0)
    rubocop-ast (1.49.0)
      parser (>= 3.3.7.2)
      prism (~> 1.7)
    rubocop-performance (1.26.1)
      lint_roller (~> 1.1)
      rubocop (>= 1.75.0, < 2.0)
      rubocop-ast (>= 1.47.1, < 2.0)
    ruby-progressbar (1.13.0)
    samovar (2.4.1)
      console (~> 1.0)
      mapping (~> 1.0)
    shellany (0.0.1)
    standard (1.53.0)
      language_server-protocol (~> 3.17.0.2)
      lint_roller (~> 1.0)
      rubocop (~> 1.82.0)
      standard-custom (~> 1.0.0)
      standard-performance (~> 1.8)
    standard-custom (1.0.2)
      lint_roller (~> 1.0)
      rubocop (~> 1.50)
    standard-performance (1.9.0)
      lint_roller (~> 1.1)
      rubocop-performance (~> 1.26.0)
    thor (1.5.0)
    unicode-display_width (3.2.0)
      unicode-emoji (~> 4.1)
    unicode-emoji (4.2.0)
    zeitwerk (2.7.4)
    builder (3.0.0)
    json (1.6.1)
    mustache (0.99.4)
    rdiscount (1.6.8)

PLATFORMS
  aarch64-linux-gnu
  aarch64-linux-musl
  arm-linux-gnu
  arm-linux-musl
  arm64-darwin
  x86-linux-gnu
  x86-linux-musl
  x86_64-darwin
  x86_64-linux-gnu
  x86_64-linux-musl
  ruby

DEPENDENCIES
  bake (~> 0.20)
  builder (~> 3.3)
  dry-struct (~> 1.8)
  guard (~> 2.18)
  kramdown (~> 2.5)
  kramdown-parser-gfm (~> 1.1)
  minitest (~> 6.0)
  phlex (~> 2.3)
  rouge (~> 4.6)
  standard (~> 1.43)

BUNDLED WITH
   4.0.6

  builder
  json
  mustache
  rdiscount

Makefile

@@ -0,0 +1,56 @@
JAVASCRIPTS=$(shell echo assets/js/*.js)
STYLESHEETS=$(shell echo assets/css/*.css)
POSTS=$(shell echo _blog/published/*.html) $(shell echo _blog/published/*.md)

all: proj blog combine

proj: projects.json templates/proj/index.html templates/proj/project.html
	@echo
	./bin/projects.js projects.json public/proj

blog: _blog/blog.json templates/blog/index.html templates/blog/post.html $(POSTS)
	@echo
	cd _blog && git pull
	./bin/blog.rb _blog public

minify: $(JAVASCRIPTS) $(STYLESHEETS)
	@echo
	./bin/minify.sh

combine: minify $(JAVASCRIPTS) $(STYLESHEETS)
	@echo
	./bin/combine.sh

publish_assets: combine
	@echo
	./bin/publish.sh --delete public/css public/images public/js
	./bin/publish.sh public/f

publish_blog: blog publish_assets
	@echo
	./bin/publish.sh --delete public/blog
	scp public/blog/posts.json bohodev.net:discussd/posts.json
	scp discussd/discussd.js bohodev.net:discussd/discussd.js
	scp public/s42/.htaccess samhuri.net:s42.ca/.htaccess
	ssh bohodev.net bin/restart-discussd.sh

publish_proj: proj publish_assets
	@echo
	./bin/publish.sh --delete public/proj

publish_index: public/index.html
	@echo
	./bin/publish.sh public/index.html

publish: publish_index publish_blog publish_proj
	@echo
	./bin/publish.sh public/.htaccess
	./bin/publish.sh public/favicon.ico

clean:
	rm -rf public/proj/*
	rm -rf public/blog/*
	rm public/css/*.css
	rm public/js/*.js

.PHONY: proj blog

Readme.md

@@ -1,117 +0,0 @@
# samhuri.net
Source code for [samhuri.net](https://samhuri.net), powered by a Ruby static site generator.
## Overview
This repository contains the Ruby static-site generator and site content for samhuri.net.
If you want an artisanal, hand-crafted static site generator for your personal blog, this might be a decent starting point. If you want a static site generator for other purposes, it has the bones for that too: rip out the bundled posts and projects plugins and write your own.
- Generator core: `lib/pressa/` (entrypoint: `lib/pressa.rb`)
- Build tasks and utility workflows: `bake.rb`
- Tests: `test/`
- Config: `site.toml` and `projects.toml`
- Content: `posts/` and `public/`
- Output: `www/` (HTML), `gemini/` (Gemini capsule)
## Requirements
- Ruby (see `.ruby-version`)
- Bundler
- `rbenv` recommended
## Setup
```bash
bin/bootstrap
```
Or manually:
```bash
rbenv install -s "$(cat .ruby-version)"
bundle install
```
## Build And Serve
```bash
bake debug # build for http://localhost:8000
bake serve # serve www/ locally
```
## Configuration
Site metadata and project data are configured with TOML files at the repository root:
- `site.toml`: site identity, default scripts/styles, a `plugins` list (for example `["posts", "projects"]`), and output-specific settings under `outputs.*` (for example `outputs.html.remote_links` and `outputs.gemini.{exclude_public,recent_posts_limit,home_links}`). When the projects plugin is enabled, it also holds that plugin's `projects_plugin` assets.
- `projects.toml`: project listing entries using `[[projects]]`.
`Pressa.create_site` loads both files from the provided `source_path` and supports URL overrides for `debug`, `beta`, and `release` builds.
## Customizing for your site
If this workflow seems like a good fit, here is the minimum to make it your own:
- Update `site.toml` with your site identity (`author`, `email`, `title`, `description`, `url`) and any global `scripts` / `styles`.
- Set `plugins` in `site.toml` to explicitly enable features (`"posts"`, `"projects"`); if it is omitted, the safe default is no plugins.
- Define your projects in `projects.toml` using `[[projects]]` entries with `name`, `title`, `description`, and `url`.
- Configure project-page-only assets in `site.toml` under `[projects_plugin]` (`scripts` and `styles`) when using the `"projects"` plugin.
- Configure output pipelines with `site.toml` `outputs.*` tables:
- `[outputs.html]` supports `remote_links` (array of `{label, href, icon}`).
- `[outputs.gemini]` supports `exclude_public`, `recent_posts_limit`, and `home_links` (array of `{label, href}`).
- Add custom plugins by implementing `Pressa::Plugin` in `lib/pressa/` and registering them in `lib/pressa/config/loader.rb`.
- Adjust rendering and layout in `lib/pressa/views/` and the static content in `public/` as needed.
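As a hypothetical sketch of these conventions, a pared-down config pair could look like the following (every value is a placeholder; the authoritative key set is whatever `Pressa.create_site` and the config loader accept):

```toml
# site.toml (illustrative)
author = "Your Name"
email = "you@example.net"
title = "example.net"
description = "A personal site."
url = "https://example.net"
plugins = ["posts", "projects"]

[outputs.html]
remote_links = [{ label = "GitHub", href = "https://github.com/you", icon = "github" }]

[outputs.gemini]
exclude_public = true
recent_posts_limit = 10
home_links = [{ label = "Archive", href = "/archive" }]

[projects_plugin]
scripts = ["/js/proj.js"]
styles = ["/css/proj.css"]

# projects.toml (illustrative)
[[projects]]
name = "example"
title = "Example Project"
description = "A placeholder entry."
url = "https://github.com/you/example"
```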
Other targets:
```bash
bake mudge
bake beta
bake release
bake gemini
bake watch target=debug
bake clean
bake publish_beta
bake publish_gemini
bake publish
```
## Draft Workflow
```bash
bake new_draft "Post title"
bake drafts
bake publish_draft public/drafts/post-title.md
```
Published posts in `posts/YYYY/MM/*.md` require YAML front matter keys:
- `Title`
- `Author`
- `Date`
- `Timestamp`
## Tests And Lint
```bash
bake test
standardrb
```
Or via bake:
```bash
bake test
bake lint
bake lint_fix
```
## Notes
- `bake watch` is Linux-only and requires `inotifywait`.
- Deployment uses `rsync` to host `mudge` (configured in `bake.rb`):
- production: `/var/www/samhuri.net/public`
- beta: `/var/www/beta.samhuri.net/public`
- gemini: `/var/gemini/samhuri.net`

TODO

@@ -0,0 +1,22 @@
TODO
====
* bookmarklet for posting to blog
* comment admin
* comment notification
* promote JS links on project pages (the only ones that mention javascript so far!)
* last commit date
* svg commit graph
* semantic markup (section, nav, header, footer, etc)
* rss -> atom
* announce posts on twitter
* finish recovering old posts and comments

assets/css/blog.css

@@ -0,0 +1,179 @@
body { margin: 0
; padding: 0
}
h1 { margin: 0
; padding: 0.2em
; color: #9ab
}
.center { text-align: center
; font-size: 1.2em
}
.hidden { display: none }
#index { width: 80%
; min-width: 300px
; max-width: 800px
; border: solid 1px #999
; -moz-border-radius: 10px
; -webkit-border-radius: 10px
; border-radius: 10px
; background-color: #eee
; margin: 1em auto
; padding: 1em
; font-size: 1.2em
; line-height: 1.5em
; list-style-type: none
}
.date { float: right }
#article,
article { width: 80%
; min-width: 400px
; max-width: 800px
; margin: 0.6em auto
; font-size: 1.2em
; line-height: 1.4em
; color: #222
}
#article h1,
article h1 { text-align: left
; font-size: 2em
; line-height: 1.2em
; font-weight: normal
; color: #222
; margin: 0
; padding-left: 0
}
#article h1 a,
article h1 a { color: #222
; text-decoration: underline
; border-bottom: none
; text-shadow: #ccc 1px 1px 5px
; -webkit-transition: text-shadow 0.4s ease-in
}
#article h1 a:hover,
article h1 a:hover { text-shadow: 1px 1px 6px #ffc
; color: #000
}
#article h2,
article h2 { font-size: 1.8em
; font-weight: normal
; text-align: left
; margin: 1em 0
; padding: 0
; color: #222
}
#article h3,
article h3 { font-size: 1.6em
; font-weight: normal
}
.time,
time { color: #444
; font-size: 1.2em
}
.permalink { font-size: 1em }
.gist { font-size: 0.8em }
/* show discussion */
#sd-container { margin: 3em 0 }
input[type=submit],
#sd { border: solid 1px #999
; border-right-color: #333
; border-bottom-color: #333
; padding: 0.4em 1em
; color: #444
; background-color: #ececec
; -moz-border-radius: 5px
; -webkit-border-radius: 5px
; border-radius: 5px
; text-decoration: none
; margin: 0 2px 2px 0
}
input[type=submit]:active,
#sd:active { margin: 2px 0 0 2px
; color: #000
; background-color: #ffc
}
#comment-stuff { display: none
; color: #efefef
; margin: 0
; padding: 2em 0
}
#comments-spinner { text-align: center }
#comments { width: 70%
; max-width: 600px
; margin: 0 auto
}
.comment { color: #555
; border-top: solid 2px #ccc
; padding-bottom: 2em
; margin-bottom: 2em
}
.comment big { font-size: 2em
; font-family: Verdana, sans-serif
}
#comment-form { width: 400px
; margin: 2em auto 0
}
input[type=text],
textarea { font-size: 1.4em
; color: #333
; width: 100%
; padding: 0.2em
; border: solid 1px #999
; -moz-border-radius: 5px
; -webkit-border-radius: 5px
; border-radius: 5px
; font-family: verdana, sans-serif
}
input:focus[type=text],
textarea:focus { border: solid 1px #333 }
textarea { height: 100px }
input[type=submit] { font-size: 1.1em
; cursor: pointer
}
pre { background-color: #eeeef3
; margin: 0.5em 1em 1em
; padding: 0.5em
; border: dashed 1px #ccc
}
footer { margin: 0 auto
; padding: 0.2em 0
; border-top: solid 1px #ddd
; clear: both
; width: 80%
}
footer p { margin: 0.5em }
footer a { border-bottom: none
; color: #25c
; font-size: 1.2em
; text-decoration: none
}

assets/css/ie6.css

@@ -0,0 +1,7 @@
ul { behavior: none
; padding-bottom: 25px
}
img { behavior: url(../js/iepngfix.htc)
; behavior: url(../../js/iepngfix.htc)
}

assets/css/ie7.css

@@ -0,0 +1 @@
ul#projects li { list-style-type: none }

assets/css/mobile.css

@@ -0,0 +1,140 @@
/* phones and iPad */
@media only screen and (orientation: portrait) and (min-device-width: 768px) and (max-device-width: 1024px),
only screen and (min-device-width: 320px) and (max-device-width: 480px),
only screen and (max-device-width: 800px)
{
ul.nav { padding: 0.5em
; width: 60%
; max-width: 600px
}
ul.nav li { display: block
; font-size: 1.5em
; line-height: 1.8em
}
ul.nav li:after { content: '' }
}
/* phones */
@media only screen and (min-device-width: 320px) and (max-device-width: 480px),
handheld and (max-device-width: 800px)
{
/* common */
h1 { font-size: 2em
; margin-top: 0.5em
}
h2 { font-size: 1.3em; line-height: 1.2em }
.navbar { font-size: 0.9em }
.navbar { width: 32% }
#breadcrumbs { margin-left: 5px }
#show-posts { margin-top: 1em
; font-size: 0.8em
}
#forkme { display: none }
ul.nav { width: 80% }
ul.nav li { font-size: 1.4em
; line-height: 1.6em
}
td { font-size: 1em
; line-height: 1.1em
}
#blog img { max-width: 100% }
#index { width: 90%
; min-width: 200px
; margin: 0.3em auto 1em
; padding: 0.5em
; font-size: 1em
}
#index li > span.date { display: block
; float: none
; color: #666
; font-size: 0.8em
}
#blog #article h1,
#blog article h1 { font-size: 1.6em
; line-height: 1.2em
; margin-top: 0
}
#blog article h2 { font-size: 1.4em }
#article,
article { min-width: 310px
; margin: 0
; padding: 0.6em 0.4em
; font-size: 0.8em
}
.time,
time { font-size: 1.0em }
pre, .gist { font-size: 0.8em }
#comment-stuff { padding: 0
; margin-top: 2em
}
#comments { width: 100% }
#comment-form { width: 90%
; margin: 0 auto
}
input[type=text],
textarea { font-size: 1.2em
; width: 95%
}
input[type=submit] { font-size: 1em }
/* proj */
#info { width: 70%
; padding: 0 1em
}
#info > div { clear: left
; width: 100%
; max-width: 100%
; padding: 0.5em 0.2em 1em
; border-left: none
; font-size: 1em
}
#stats { font-size: 1em; margin-bottom: 0.5em }
footer { margin: 0
; padding: 0.5em 0
; font-size: 1em
; width: 100%
}
}
/* landscape */
@media only screen and (orientation: landscape) and (min-device-width: 768px) and (max-device-width: 1024px),
only screen and (orientation: landscape) and (min-device-width: 320px) and (max-device-width: 480px),
handheld and (orientation: landscape) and (max-device-width: 800px)
{
body { font-size: 0.8em }
}
/* iPad portrait */
@media only screen and (orientation: portrait) and (min-device-width: 768px) and (max-device-width: 1024px)
{
article > header > h1 { font-size: 1.8em }
}

@@ -0,0 +1,7 @@
td { font-size: 1.5em
; line-height: 1.6em
}
td:nth-child(2) { padding: 0 10px }
.highlight { font-size: 1.2em }

assets/css/proj.css

@@ -0,0 +1,43 @@
#stats a { text-decoration: none }
#info { text-align: center
; margin: 1em auto
; padding: 1em
; border: solid 1px #ccc
; width: 90%
; max-width: 950px
; background-color: #fff
; -moz-border-radius: 20px
; -webkit-border-radius: 20px
; border-radius: 20px
; behavior: url(../js/border-radius.htc)
; behavior: url(../../js/border-radius.htc)
}
h4 { margin: 0.5em 0 0.7em }
#info > div { text-align: center
; font-size: 1.3em
; width: 31%
; max-width: 400px
; float: left
; display: inline
; padding: 0.5em 0.2em
; border-left: dashed 1px #aaa
}
#info > div:first-child { border-left: none }
#info ul { list-style-type: none
; text-align: center
; padding: 0
; margin: 0
}
#info li { padding: 0.2em 0
; margin: 0
}
#info > br.clear { clear: both }
#contributors-box a { line-height: 1.8em }

assets/css/style.css

@@ -0,0 +1,90 @@
body { background-color: #f7f7f7
; color: #222
; font-family: 'Helvetica Neue', Verdana, sans-serif
}
h1 { text-align: center
; font-size: 2em
; font-weight: normal
; margin: 0.8em 0 0.4em
; padding: 0
}
h2 { text-align: center
; font-size: 1.7em
; line-height: 1.1em
; font-weight: normal
; margin: 0.2em 0 1em
; padding: 0
}
a { color: #0E539C }
a.img { border: none }
.navbar { display: inline-block
; width: 33%
; font-size: 1.5em
; line-height: 1.8em
; margin: 0
; padding: 0
}
.navbar a { text-shadow: none }
#breadcrumbs a { color: #222 }
#title { text-align: center }
#archive { text-align: right }
#forkme { position: absolute
; top: 0
; right: 0
; border: none
}
ul.nav { text-align: center
; max-width: 400px
; margin: 0 auto
; padding: 1em
; border: solid 1px #ccc
; background-color: #fff
; -moz-border-radius: 20px
; -webkit-border-radius: 20px
; border-radius: 20px
; behavior: url(js/border-radius.htc)
; behavior: url(../js/border-radius.htc)
}
ul.nav li { display: block
; font-size: 1.6em
; line-height: 1.8em
; margin: 0
; padding: 0
}
ul.nav li a { padding: 5px
; text-decoration: none
; border-bottom: solid 1px #fff
; text-shadow: #ccc 2px 2px 3px
}
ul.nav li a:visited { color: #227 }
ul.nav li a:hover,
ul.nav li a:active { text-shadow: #cca 2px 2px 3px
; border-bottom: solid 1px #aaa
}
ul.nav li a:active { text-shadow: none }
footer { text-align: center
; font-size: 1.2em
; margin: 1em
}
footer a { border-bottom: none }
#promote-js { margin-top: 3em
; text-align: center
}
#promote-js img { border: none }

assets/js/blog.js

@@ -0,0 +1,133 @@
;(function() {
if (typeof console === 'undefined')
window.console = {}
if (typeof console.log !== 'function')
window.console.log = function(){}
if (typeof console.dir !== 'function')
window.console.dir = function(){}
var server = 'http://bohodev.net:8000/'
, getCommentsURL = function(post) { return server + 'comments/' + post }
, postCommentURL = function() { return server + 'comment' }
, countCommentsURL = function(post) { return server + 'count/' + post }
function getComments(cb) {
SJS.request({uri: getCommentsURL(SJS.filename)}, function(err, request, body) {
if (err) {
if (typeof cb === 'function') cb(err)
return
}
var data
, comments
, h = ''
try {
data = JSON.parse(body)
}
catch (e) {
console.log('not json -> ' + body)
return
}
comments = data.comments
if (comments && comments.length) {
h = data.comments.map(function(c) {
return tmpl('comment_tmpl', c)
}).join('')
$('#comments').html(h)
}
if (typeof cb === 'function') cb()
})
}
function showComments(cb) {
$('#sd-container').remove()
getComments(function(err) {
$('#comments-spinner').hide()
if (err) {
$('#comments').text('derp')
if (typeof cb === 'function') cb(err)
}
else {
$('#comment-stuff').slideDown(1.5, function() {
if (typeof cb === 'function') cb()
else this.scrollIntoView(true)
})
}
})
}
jQuery(function($) {
$('#need-js').remove()
SJS.request({uri: countCommentsURL(SJS.filename)}, function(err, request, body) {
if (err) return
var data
, n
try {
data = JSON.parse(body)
}
catch (e) {
console.log('not json -> ' + body)
return
}
n = data.count
$('#sd').text(n > 0 ? 'show the discussion (' + n + ')' : 'start the discussion')
})
// jump to comment if linked directly
var hash = window.location.hash || ''
if (/^#comment-\d+/.test(hash)) {
showComments(function (err) {
if (!err) {
window.location.hash = ''
window.location.hash = hash
}
})
}
$('#sd').click(showComments)
var showdown = new Showdown.converter()
, tzOffset = -new Date().getTimezoneOffset() * 60 * 1000
$('#comment-form').submit(function() {
var comment = $(this).serializeObject()
comment.name = (comment.name || '').trim() || 'anonymous'
comment.url = (comment.url || '').trim()
if (comment.url && !comment.url.match(/^https?:\/\//)) {
comment.url = 'http://' + comment.url
}
comment.body = comment.body || ''
if (!comment.body) {
alert("is that all you have to say?")
document.getElementById('thoughts').focus()
return false
}
comment.timestamp = +new Date() + tzOffset
var options = { method: 'POST'
, uri: postCommentURL()
, body: JSON.stringify(comment)
}
SJS.request(options, function(err, request, body) {
if (err) {
console.dir(err)
alert('derp')
return false
}
$('#comment-form').get(0).reset()
comment.timestamp = +new Date()
comment.html = showdown.makeHtml(comment.body)
comment.name = (comment.name || '').trim() || 'anonymous'
comment.url = (comment.url || '').trim()
if (comment.url && !comment.url.match(/^https?:\/\//)) {
comment.url = 'http://' + comment.url
}
$('#comments').append(tmpl('comment_tmpl', comment))
})
return false
})
})
}());

@@ -218,12 +218,16 @@
 // Data is also available via the `data` property.
 Resource.prototype.fetch = function(cb) {
   if (this.data) {
-    cb.call(this, null, this.data)
+    cb(null, this.data)
   }
   else {
     var self = this
     fetch(this.path, function(err, data) {
-      if (!err) {
+      // console.log('FETCH', self.path, err, data)
+      if (err) {
+        // console.log(err)
+      }
+      else {
         self.data = data
         mixin(self, data)
       }
@@ -242,7 +246,11 @@
   else {
     var self = this
     fetch(this.path + '/' + thing, function(err, data) {
-      if (!err) {
+      // console.log('FETCH SUBRESOURCE', self.path, thing, err, data)
+      if (err) {
+        // console.log(self.path, err)
+      }
+      else {
         self['_' + thing] = data
       }
       if (typeof cb === 'function') {
@@ -713,6 +721,7 @@
 // bootstrap loader from LABjs (load is declared earlier)
 load = function(url) {
   var oDOC = document
+    , handler
     , head = oDOC.head || oDOC.getElementsByTagName("head")
   // loading code borrowed directly from LABjs itself

assets/js/jquery-serializeObject.js

@@ -0,0 +1,31 @@
/*!
* jQuery serializeObject - v0.2 - 1/20/2010
* http://benalman.com/projects/jquery-misc-plugins/
*
* Copyright (c) 2010 "Cowboy" Ben Alman
* Dual licensed under the MIT and GPL licenses.
* http://benalman.com/about/license/
*/
// Whereas .serializeArray() serializes a form into an array, .serializeObject()
// serializes a form into an (arguably more useful) object.
;(function($,undefined){
'$:nomunge'; // Used by YUI compressor.
$.fn.serializeObject = function(){
var obj = {};
$.each( this.serializeArray(), function(i,o){
var n = o.name,
v = o.value;
obj[n] = obj[n] === undefined ? v
: $.isArray( obj[n] ) ? obj[n].concat( v )
: [ obj[n], v ];
});
return obj;
};
})(jQuery);

assets/js/proj.js

@@ -0,0 +1,137 @@
;(function() {
if (typeof console === 'undefined') {
console = {log:function(){}}
}
var global = this
global.SJS = {
proj: function(name) {
SJS.projName = name
var data = createObjectStore(name)
if (document.addEventListener) {
document.addEventListener('DOMContentLoaded', ready, false)
} else if (window.attachEvent) {
window.attachEvent('onload', ready)
}
function ready() {
function addClass(el, name) {
var c = el.className || name
if (!c.match(new RegExp('\b' + name + '\b', 'i'))) c += ' ' + name
}
function html(id, h) {
document.getElementById(id).innerHTML = h
}
var body = document.getElementsByTagName('body')[0]
, text
if ('innerText' in body) {
text = function(id, text) {
document.getElementById(id).innerText = text
}
} else {
text = function(id, text) {
document.getElementById(id).textContent = text
}
}
function highlight(id) {
document.getElementById(id).style.className = ' highlight'
}
function textHighlight(id, t) {
text(id, t)
document.getElementById(id).className = ' highlight'
}
function hide(id) {
document.getElementById(id).style.display = 'none'
}
function langsByUsage(langs) {
return Object.keys(langs).sort(function(a, b) {
return langs[a] < langs[b] ? -1 : 1
})
}
function listify(things) {
return '<ul><li>' + things.join('</li><li>') + '</li></ul>'
}
function updateBranches(name, branches) {
function branchLink(b) {
return '<a href=https://github.com/samsonjs/' + name + '/tree/' + b.name + '>' + b.name + '</a>'
}
html('branches', listify(branches.map(branchLink)))
}
function updateContributors(contributors) {
function userLink(u) {
return '<a href=https://github.com/' + u.login + '>' + (u.name || u.login) + '</a>'
}
html('contributors', listify(contributors.map(userLink)))
}
function updateLangs(langs) {
html('langs', listify(langsByUsage(langs)))
}
function updateN(name, things) {
textHighlight('n' + name, things.length)
if (things.length === 1) hide(name.charAt(0) + 'plural')
}
var t = data.get('t-' + name)
if (!t || +new Date() - t > 3600 * 1000) {
console.log('stale ' + String(t))
data.set('t-' + name, +new Date())
GITR.repo('samsonjs', name)
.fetchBranches(function(err, branches) {
if (err) {
text('branches', '(oops)')
} else {
data.set('branches', branches)
updateBranches(name, branches)
}
})
.fetchLanguages(function(err, langs) {
if (err) {
text('langs', '(oops)')
return
}
data.set('langs', langs)
updateLangs(langs)
})
.fetchContributors(function(err, users) {
if (err) {
text('contributors', '(oops)')
} else {
data.set('contributors', users)
updateContributors(users)
}
})
.fetchWatchers(function(err, users) {
if (err) {
text('nwatchers', '?')
} else {
data.set('watchers', users)
updateN('watchers', users)
}
})
.fetchForks(function(err, repos) {
if (err) {
text('nforks', '?')
} else {
data.set('forks', repos)
updateN('forks', repos)
}
})
} else {
console.log('hit ' + t + ' (' + (+new Date() - t) + ')')
updateBranches(name, data.get('branches'))
updateLangs(data.get('langs'))
updateContributors(data.get('contributors'))
updateN('watchers', data.get('watchers'))
updateN('forks', data.get('forks'))
}
}
}
}
}());

36
assets/js/request.js Normal file
@@ -0,0 +1,36 @@
;(function() {
if (typeof window.SJS === 'undefined') window.SJS = {}
// cors xhr request, quacks like mikeal's request module
window.SJS.request = function(options, cb) {
var url = options.uri
, method = options.method || 'GET'
, headers = options.headers || {}
, body = typeof options.body === 'undefined' ? null : String(options.body)
, xhr = new XMLHttpRequest()
// withCredentials => cors
if ('withCredentials' in xhr) {
xhr.open(method, url, true)
  } else if (typeof XDomainRequest !== 'undefined') { // IE8/9 reports typeof 'object', not 'function'
xhr = new XDomainRequest()
xhr.open(method, url)
} else {
cb(new Error('cross domain requests not supported'))
return
}
for (var k in headers) if (headers.hasOwnProperty(k)) {
xhr.setRequestHeader(k, headers[k])
}
xhr.onload = function() {
if (xhr.status === 200 || xhr.status === 204) {
cb(null, xhr, xhr.responseText)
}
else {
console.log('xhr error ' + xhr.status + ': ' + xhr.responseText)
cb(new Error('error: ' + xhr.status))
}
}
xhr.send(body)
}
}());
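The wrapper above normalizes its options (default method, optional headers, stringified body) before dispatching the XHR. That normalization can be exercised on its own; here is a minimal sketch, where the helper name `normalizeOptions` is mine and not part of the file:

```javascript
// Pull the option-normalization step of SJS.request out as a pure helper.
function normalizeOptions(options) {
  return {
    uri: options.uri,
    method: options.method || 'GET',
    headers: options.headers || {},
    body: typeof options.body === 'undefined' ? null : String(options.body)
  }
}

var req = normalizeOptions({ uri: '/posts.json' })
console.log(req.method) // → "GET"
console.log(req.body)   // → null
```

Note that a numeric or object `body` is coerced to a string, matching what `xhr.send` ultimately receives.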

1296
assets/js/showdown.js Normal file

File diff suppressed because it is too large

@@ -0,0 +1,92 @@
if (!window.localStorage || !window.sessionStorage) (function () {
var Storage = function (type) {
function createCookie(name, value, days) {
var date, expires;
if (days) {
date = new Date();
date.setTime(date.getTime()+(days*24*60*60*1000));
expires = "; expires="+date.toGMTString();
} else {
expires = "";
}
document.cookie = name+"="+value+expires+"; path=/";
}
function readCookie(name) {
var nameEQ = name + "=",
ca = document.cookie.split(';'),
i, c;
for (i=0; i < ca.length; i++) {
c = ca[i];
while (c.charAt(0)==' ') {
c = c.substring(1,c.length);
}
if (c.indexOf(nameEQ) == 0) {
return c.substring(nameEQ.length,c.length);
}
}
return null;
}
function setData(data) {
data = JSON.stringify(data);
if (type == 'session') {
window.top.name = data;
} else {
createCookie('localStorage', data, 365);
}
}
function clearData() {
if (type == 'session') {
window.top.name = '';
} else {
createCookie('localStorage', '', 365);
}
}
function getData() {
var data = type == 'session' ? window.top.name : readCookie('localStorage');
return data ? JSON.parse(data) : {};
}
// initialise if there's already data
var data = getData();
return {
clear: function () {
data = {};
clearData();
},
getItem: function (key) {
return data[key] || null;
},
key: function (i) {
// not perfect, but works
var ctr = 0;
for (var k in data) {
if (ctr == i) return k;
else ctr++;
}
return null;
},
removeItem: function (key) {
delete data[key];
setData(data);
},
setItem: function (key, value) {
data[key] = value+''; // forces the value to a string
setData(data);
}
};
};
if (!window.localStorage) window.localStorage = new Storage('local');
if (!window.sessionStorage) window.sessionStorage = new Storage('session');
}());
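The polyfill above exposes the Web Storage surface (`getItem`, `setItem`, `key`, `removeItem`, `clear`) over cookies or `window.top.name`. Its API contract can be sanity-checked with a purely in-memory stand-in; this sketch is illustrative only, and `makeStorage` is not part of the file:

```javascript
// Minimal in-memory stand-in mirroring the polyfill's API contract.
function makeStorage() {
  var data = {}
  return {
    clear: function () { data = {} },
    getItem: function (key) {
      return Object.prototype.hasOwnProperty.call(data, key) ? data[key] : null
    },
    key: function (i) {
      var ctr = 0
      for (var k in data) {
        if (ctr === i) return k
        ctr++
      }
      return null
    },
    removeItem: function (key) { delete data[key] },
    setItem: function (key, value) { data[key] = value + '' } // values coerce to strings
  }
}

var store = makeStorage()
store.setItem('n', 42)
console.log(store.getItem('n')) // → "42", stored as a string like real localStorage
```

Like the polyfill (and real `localStorage`), values come back as strings and missing keys return `null`.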

@@ -1,6 +1,7 @@
;(function() {
var global = this
if (typeof localStorage !== 'undefined') {
window.createObjectStore = function(namespace) {
global.createObjectStore = function(namespace) {
function makeKey(k) {
return '--' + namespace + '-' + (k || '')
}
@@ -20,7 +21,9 @@
var val = localStorage[makeKey(key)]
try {
while (typeof val === 'string') val = JSON.parse(val)
} catch (e) {}
} catch (e) {
//console.log('string?')
}
return val
},
set: function(key, val) {
@@ -31,10 +34,10 @@
}
}
}
window.ObjectStore = createObjectStore('default')
global.ObjectStore = createObjectStore('default')
} else {
// Create an in-memory store, should probably fall back to cookies
window.createObjectStore = function() {
global.createObjectStore = function() {
var store = {}
return {
clear: function() { store = {} },
@@ -43,6 +46,6 @@
remove: function(key) { delete store[key] }
}
}
window.ObjectStore = createObjectStore()
global.ObjectStore = createObjectStore()
}
}());
}());

83
assets/js/strftime.js Normal file
@@ -0,0 +1,83 @@
/// strftime
/// http://github.com/samsonjs/strftime
/// @_sjs
///
/// Copyright 2010 Sami Samhuri <sami@samhuri.net>
/// MIT License
var strftime = (function() {
var Weekdays = ['Sunday', 'Monday', 'Tuesday', 'Wednesday', 'Thursday',
'Friday', 'Saturday'];
var WeekdaysShort = ['Sun', 'Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat'];
var Months = ['January', 'February', 'March', 'April', 'May', 'June', 'July',
'August', 'September', 'October', 'November', 'December'];
var MonthsShort = ['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun', 'Jul', 'Aug',
'Sep', 'Oct', 'Nov', 'Dec'];
function pad(n, padding) {
padding = padding || '0';
return n < 10 ? (padding + n) : n;
}
function hours12(d) {
var hour = d.getHours();
if (hour == 0) hour = 12;
else if (hour > 12) hour -= 12;
return hour;
}
// Most of the specifiers supported by C's strftime
function strftime(fmt, d) {
d || (d = new Date());
return fmt.replace(/%(.)/g, function(_, c) {
switch (c) {
case 'A': return Weekdays[d.getDay()];
case 'a': return WeekdaysShort[d.getDay()];
case 'B': return Months[d.getMonth()];
case 'b': // fall through
case 'h': return MonthsShort[d.getMonth()];
case 'D': return strftime('%m/%d/%y', d);
case 'd': return pad(d.getDate());
case 'e': return d.getDate();
case 'F': return strftime('%Y-%m-%d', d);
case 'H': return pad(d.getHours());
case 'I': return pad(hours12(d));
case 'k': return pad(d.getHours(), ' ');
case 'l': return pad(hours12(d), ' ');
case 'M': return pad(d.getMinutes());
case 'm': return pad(d.getMonth() + 1);
case 'n': return '\n';
case 'p': return d.getHours() < 12 ? 'AM' : 'PM';
case 'R': return strftime('%H:%M', d);
case 'r': return strftime('%I:%M:%S %p', d);
case 'S': return pad(d.getSeconds());
case 's': return d.getTime();
case 'T': return strftime('%H:%M:%S', d);
case 't': return '\t';
case 'u':
var day = d.getDay();
return day == 0 ? 7 : day; // 1 - 7, Monday is first day of the week
case 'v': return strftime('%e-%b-%Y', d);
case 'w': return d.getDay(); // 0 - 6, Sunday is first day of the week
case 'Y': return d.getFullYear();
case 'y':
var year = d.getYear();
return year < 100 ? year : year - 100;
case 'Z':
var tz = d.toString().match(/\((\w+)\)/);
return tz && tz[1] || '';
      case 'z':
        var off = d.getTimezoneOffset();
        // getTimezoneOffset() is positive west of UTC, so flip the sign for %z
        var sign = off > 0 ? '-' : '+';
        off = Math.abs(off);
        return sign + pad(Math.floor(off / 60)) + pad(off % 60);
default: return c;
}
});
}
return strftime;
}());
if (typeof exports !== 'undefined') exports.strftime = strftime;
else (function(global) { global.strftime = strftime }(this));
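The `%F` specifier above is just zero-padded year-month-day. For illustration, that branch reduces to the following self-contained equivalent (reimplemented here; `formatYMD` is not exported by the file):

```javascript
// Equivalent of strftime's %F branch: %Y-%m-%d with zero padding.
function pad(n, padding) {
  padding = padding || '0';
  return n < 10 ? (padding + n) : n;
}

function formatYMD(d) {
  return d.getFullYear() + '-' + pad(d.getMonth() + 1) + '-' + pad(d.getDate());
}

console.log(formatYMD(new Date(2010, 8, 5))); // → "2010-09-05"
```

Month is zero-based on `Date`, hence the `+ 1` before padding, exactly as in the `%m` case above.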

35
assets/js/tmpl.js Normal file
@@ -0,0 +1,35 @@
// Simple JavaScript Templating
// John Resig - http://ejohn.org/ - MIT Licensed
;(function(){
var cache = {};
this.tmpl = function tmpl(str, data){
// Figure out if we're getting a template, or if we need to
// load the template - and be sure to cache the result.
var fn = !/\W/.test(str) ?
cache[str] = cache[str] ||
tmpl(document.getElementById(str).innerHTML) :
// Generate a reusable function that will serve as a template
// generator (and which will be cached).
new Function("obj",
"var p=[],print=function(){p.push.apply(p,arguments);};" +
// Introduce the data as local variables using with(){}
"with(obj){p.push('" +
// Convert the template into pure JavaScript
str
.replace(/[\r\t\n]/g, " ")
.split("<%").join("\t")
.replace(/((^|%>)[^\t]*)'/g, "$1\r")
.replace(/\t=(.*?)%>/g, "',$1,'")
.split("\t").join("');")
.split("%>").join("p.push('")
.split("\r").join("\\'")
+ "');}return p.join('');");
// Provide some basic currying to the user
return data ? fn( data ) : fn;
};
}());
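Resig's trick above is to compile a template string into a reusable `Function` that pushes literal chunks and data lookups into an array and joins them. The core idea, stripped of the string-rewriting machinery, can be sketched like this (the `compile` helper and its alternating-array input are hypothetical simplifications, not part of the file):

```javascript
// Compile-to-function idea behind tmpl(): alternate literal chunks and
// data keys, return a reusable renderer that joins them per call.
function compile(parts) {
  return function (data) {
    return parts.map(function (p, i) {
      return i % 2 === 0 ? p : data[p]
    }).join('')
  }
}

var greet = compile(['Hello, ', 'name', '!'])
console.log(greet({ name: 'sjs' })) // → "Hello, sjs!"
```

As in `tmpl`, the expensive work (building the renderer) happens once, and each render is a cheap join.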

500
bake.rb
@@ -1,500 +0,0 @@
# Build tasks for samhuri.net static site generator
require "etc"
require "fileutils"
require "open3"
require "tmpdir"
LIB_PATH = File.expand_path("lib", __dir__).freeze
$LOAD_PATH.unshift(LIB_PATH) unless $LOAD_PATH.include?(LIB_PATH)
DRAFTS_DIR = "public/drafts".freeze
PUBLISH_HOST = "mudge".freeze
PRODUCTION_PUBLISH_DIR = "/var/www/samhuri.net/public".freeze
BETA_PUBLISH_DIR = "/var/www/beta.samhuri.net/public".freeze
GEMINI_PUBLISH_DIR = "/var/gemini/samhuri.net".freeze
WATCHABLE_DIRECTORIES = %w[public posts lib].freeze
LINT_TARGETS = %w[bake.rb Gemfile lib test].freeze
BUILD_TARGETS = %w[debug mudge beta release gemini].freeze
# Generate the site in debug mode (localhost:8000)
def debug
build("http://localhost:8000", output_format: "html", target_path: "www")
end
# Generate the site for the mudge development server
def mudge
build("http://mudge:8000", output_format: "html", target_path: "www")
end
# Generate the site for beta/staging
def beta
build("https://beta.samhuri.net", output_format: "html", target_path: "www")
end
# Generate the site for production
def release
build("https://samhuri.net", output_format: "html", target_path: "www")
end
# Generate the Gemini capsule for production
def gemini
build("https://samhuri.net", output_format: "gemini", target_path: "gemini")
end
# Start local development server
def serve
require "webrick"
server = WEBrick::HTTPServer.new(Port: 8000, DocumentRoot: "www")
trap("INT") { server.shutdown }
puts "Server running at http://localhost:8000"
server.start
end
# Create a new draft in public/drafts/.
# @parameter title_parts [Array] Optional title words; defaults to Untitled.
def new_draft(*title_parts)
title, filename =
if title_parts.empty?
["Untitled", next_available_draft]
else
given_title = title_parts.join(" ")
slug = slugify(given_title)
abort "Error: title cannot be converted to a filename." if slug.empty?
filename = "#{slug}.md"
path = draft_path(filename)
abort "Error: draft already exists at #{path}" if File.exist?(path)
[given_title, filename]
end
FileUtils.mkdir_p(DRAFTS_DIR)
path = draft_path(filename)
content = render_draft_template(title)
File.write(path, content)
puts "Created new draft at #{path}"
puts ">>> Contents below <<<"
puts
puts content
end
# Publish a draft by moving it to posts/YYYY/MM and updating dates.
# @parameter input_path [String] Draft path or filename in public/drafts.
def publish_draft(input_path = nil)
if input_path.nil? || input_path.strip.empty?
puts "Usage: bake publish_draft <draft-path-or-filename>"
puts
puts "Available drafts:"
drafts = Dir.glob("#{DRAFTS_DIR}/*.md").map { |path| File.basename(path) }
if drafts.empty?
puts " (no drafts found)"
else
drafts.each { |draft| puts " #{draft}" }
end
abort
end
draft_path_value, draft_file = resolve_draft_input(input_path)
abort "Error: File not found: #{draft_path_value}" unless File.exist?(draft_path_value)
now = Time.now
content = File.read(draft_path_value)
content.sub!(/^Date:.*$/, "Date: #{ordinal_date(now)}")
content.sub!(/^Timestamp:.*$/, "Timestamp: #{now.strftime("%Y-%m-%dT%H:%M:%S%:z")}")
target_dir = "posts/#{now.strftime("%Y/%m")}"
FileUtils.mkdir_p(target_dir)
target_path = "#{target_dir}/#{draft_file}"
File.write(target_path, content)
FileUtils.rm_f(draft_path_value)
puts "Published draft: #{draft_path_value} -> #{target_path}"
end
# Watch content directories and rebuild on every change.
# @parameter target [String] One of debug, mudge, beta, release, or gemini.
def watch(target: "debug")
unless command_available?("inotifywait")
abort "inotifywait is required (install inotify-tools)."
end
loop do
abort "Error: watch failed." unless system("inotifywait", "-e", "modify,create,delete,move", *watch_paths)
puts "changed at #{Time.now}"
sleep 2
run_build_target(target)
end
end
# Publish to beta/staging server
def publish_beta
beta
run_rsync(local_paths: ["www/"], publish_dir: BETA_PUBLISH_DIR, dry_run: false, delete: true)
end
# Publish Gemini capsule to production
def publish_gemini
gemini
run_rsync(local_paths: ["gemini/"], publish_dir: GEMINI_PUBLISH_DIR, dry_run: false, delete: true)
end
# Publish to production server
def publish
release
run_rsync(local_paths: ["www/"], publish_dir: PRODUCTION_PUBLISH_DIR, dry_run: false, delete: true)
publish_gemini
end
# Clean generated files
def clean
FileUtils.rm_rf("www")
FileUtils.rm_rf("gemini")
puts "Cleaned www/ and gemini/ directories"
end
# Default task: run coverage and lint.
def default
coverage
lint
end
# Run Minitest tests
def test
run_test_suite(test_file_list)
end
# Run Guard for continuous testing
def guard
exec "bundle exec guard"
end
# List all available drafts
def drafts
Dir.glob("#{DRAFTS_DIR}/*.md").sort.each do |draft|
puts File.basename(draft)
end
end
# Run StandardRB linter
def lint
run_standardrb
end
# Auto-fix StandardRB issues
def lint_fix
run_standardrb("--fix")
end
# Measure line coverage for files under lib/.
# @parameter lowest [Integer] Number of lowest-covered files to print (default: 10, use 0 to hide).
def coverage(lowest: 10)
lowest_count = Integer(lowest)
abort "Error: lowest must be >= 0." if lowest_count.negative?
run_coverage(test_files: test_file_list, lowest_count:)
end
# Compare line coverage for files under lib/ against a baseline and fail on regression.
# @parameter baseline [String] Baseline ref, or "merge-base" (default) to compare against merge-base with remote default branch.
# @parameter lowest [Integer] Number of lowest-covered files to print for the current checkout (default: 10, use 0 to hide).
def coverage_regression(baseline: "merge-base", lowest: 10)
lowest_count = Integer(lowest)
abort "Error: lowest must be >= 0." if lowest_count.negative?
baseline_ref = resolve_coverage_baseline_ref(baseline)
baseline_commit = capture_command("git", "rev-parse", "--short", baseline_ref).strip
puts "Running coverage for current checkout..."
current_output = capture_coverage_output(test_files: test_file_list, lowest_count:, chdir: Dir.pwd)
print current_output
current_percent = parse_coverage_percent(current_output)
puts "Running coverage for baseline #{baseline_ref} (#{baseline_commit})..."
baseline_percent = with_temporary_worktree(ref: baseline_ref) do |worktree_path|
baseline_tests = test_file_list(chdir: worktree_path)
baseline_output = capture_coverage_output(test_files: baseline_tests, lowest_count: 0, chdir: worktree_path)
parse_coverage_percent(baseline_output)
end
delta = current_percent - baseline_percent
puts format("Baseline coverage (%s %s): %.2f%%", baseline_ref, baseline_commit, baseline_percent)
puts format("Coverage delta: %+0.2f%%", delta)
return unless delta.negative?
abort format("Error: coverage regressed by %.2f%% against %s (%s).", -delta, baseline_ref, baseline_commit)
end
private
def run_test_suite(test_files)
run_command("ruby", "-Ilib", "-Itest", "-e", "ARGV.each { |file| require File.expand_path(file) }", *test_files)
end
def run_coverage(test_files:, lowest_count:)
output = capture_coverage_output(test_files:, lowest_count:, chdir: Dir.pwd)
print output
end
def test_file_list(chdir: Dir.pwd)
test_files = Dir.chdir(chdir) { Dir.glob("test/**/*_test.rb").sort }
abort "Error: no tests found in test/**/*_test.rb under #{chdir}" if test_files.empty?
test_files
end
def coverage_script(lowest_count:)
<<~RUBY
require "coverage"
root = Dir.pwd
lib_root = File.join(root, "lib") + "/"
Coverage.start(lines: true)
at_exit do
result = Coverage.result
rows = result.keys
.select { |file| file.start_with?(lib_root) && file.end_with?(".rb") }
.sort
.map do |file|
lines = result[file][:lines] || []
total = 0
covered = 0
lines.each do |line_count|
next if line_count.nil?
total += 1
covered += 1 if line_count.positive?
end
percent = total.zero? ? 100.0 : (covered.to_f / total * 100)
[file, covered, total, percent]
end
covered_lines = rows.sum { |row| row[1] }
total_lines = rows.sum { |row| row[2] }
overall_percent = total_lines.zero? ? 100.0 : (covered_lines.to_f / total_lines * 100)
puts format("Coverage (lib): %.2f%% (%d / %d lines)", overall_percent, covered_lines, total_lines)
unless #{lowest_count}.zero? || rows.empty?
puts "Lowest covered files:"
rows.sort_by { |row| row[3] }.first(#{lowest_count}).each do |file, covered, total, percent|
relative_path = file.delete_prefix(root + "/")
puts format(" %6.2f%% %d/%d %s", percent, covered, total, relative_path)
end
end
end
ARGV.each { |file| require File.expand_path(file) }
RUBY
end
def capture_coverage_output(test_files:, lowest_count:, chdir:)
capture_command("ruby", "-Ilib", "-Itest", "-e", coverage_script(lowest_count:), *test_files, chdir:)
end
def parse_coverage_percent(output)
match = output.match(/Coverage \(lib\):\s+([0-9]+\.[0-9]+)%/)
abort "Error: unable to parse coverage output." unless match
Float(match[1])
end
def resolve_coverage_baseline_ref(baseline)
baseline_name = baseline.to_s.strip
abort "Error: baseline cannot be empty." if baseline_name.empty?
return coverage_merge_base_ref if baseline_name == "merge-base"
baseline_name
end
def coverage_merge_base_ref
remote = preferred_remote
remote_head_ref = remote_default_branch_ref(remote)
merge_base = capture_command("git", "merge-base", "HEAD", remote_head_ref).strip
abort "Error: could not resolve merge-base with #{remote_head_ref}." if merge_base.empty?
merge_base
end
def preferred_remote
upstream = capture_command_optional("git", "rev-parse", "--abbrev-ref", "--symbolic-full-name", "@{upstream}").strip
upstream_remote = upstream.split("/").first unless upstream.empty?
return upstream_remote if upstream_remote && !upstream_remote.empty?
remotes = capture_command("git", "remote").lines.map(&:strip).reject(&:empty?)
abort "Error: no git remotes configured; pass baseline=<ref>." if remotes.empty?
remotes.include?("origin") ? "origin" : remotes.first
end
def remote_default_branch_ref(remote)
symbolic = capture_command_optional("git", "symbolic-ref", "--quiet", "refs/remotes/#{remote}/HEAD").strip
if symbolic.empty?
fallback = "#{remote}/main"
capture_command("git", "rev-parse", "--verify", fallback)
return fallback
end
symbolic.sub("refs/remotes/", "")
end
def with_temporary_worktree(ref:)
temp_root = Dir.mktmpdir("coverage-baseline-")
worktree_path = File.join(temp_root, "worktree")
run_command("git", "worktree", "add", "--detach", worktree_path, ref)
begin
yield worktree_path
ensure
system("git", "worktree", "remove", "--force", worktree_path)
FileUtils.rm_rf(temp_root)
end
end
def capture_command(*command, chdir: Dir.pwd)
stdout, stderr, status = Dir.chdir(chdir) { Open3.capture3(*command) }
output = +""
output << stdout unless stdout.empty?
output << stderr unless stderr.empty?
abort "Error: command failed: #{command.join(" ")}\n#{output}" unless status.success?
output
end
def capture_command_optional(*command, chdir: Dir.pwd)
stdout, stderr, status = Dir.chdir(chdir) { Open3.capture3(*command) }
return stdout if status.success?
return "" if stderr.include?("no upstream configured") || stderr.include?("is not a symbolic ref")
""
end
# Build the site with specified URL and output format.
# @parameter url [String] The site URL to use.
# @parameter output_format [String] One of html or gemini.
# @parameter target_path [String] Target directory for generated output.
def build(url, output_format:, target_path:)
require "pressa"
puts "Building #{output_format} site for #{url}..."
site = Pressa.create_site(source_path: ".", url_override: url, output_format:)
generator = Pressa::SiteGenerator.new(site:)
generator.generate(source_path: ".", target_path:)
puts "Site built successfully in #{target_path}/"
end
def run_build_target(target)
target_name = target.to_s
unless BUILD_TARGETS.include?(target_name)
abort "Error: invalid target '#{target_name}'. Use one of: #{BUILD_TARGETS.join(", ")}"
end
public_send(target_name)
end
def watch_paths
WATCHABLE_DIRECTORIES.flat_map { |path| ["-r", path] }
end
def standardrb_command(*extra_args)
["bundle", "exec", "standardrb", *extra_args, *LINT_TARGETS]
end
def run_standardrb(*extra_args)
run_command(*standardrb_command(*extra_args))
end
def run_command(*command)
abort "Error: command failed: #{command.join(" ")}" unless system(*command)
end
def run_rsync(local_paths:, publish_dir:, dry_run:, delete:)
command = ["rsync", "-aKv", "-e", "ssh -4"]
command << "--dry-run" if dry_run
command << "--delete" if delete
command.concat(local_paths)
command << "#{PUBLISH_HOST}:#{publish_dir}"
abort "Error: rsync failed." unless system(*command)
end
def resolve_draft_input(input_path)
if input_path.include?("/")
if input_path.start_with?("posts/")
abort "Error: '#{input_path}' is already published in posts/ directory"
end
[input_path, File.basename(input_path)]
else
[draft_path(input_path), input_path]
end
end
def draft_path(filename)
File.join(DRAFTS_DIR, filename)
end
def slugify(title)
title.downcase
.gsub(/[^a-z0-9\s-]/, "")
.gsub(/\s+/, "-").squeeze("-")
.gsub(/^-|-$/, "")
end
def next_available_draft(base_filename = "untitled.md")
return base_filename unless File.exist?(draft_path(base_filename))
name_without_ext = File.basename(base_filename, ".md")
counter = 1
loop do
numbered_filename = "#{name_without_ext}-#{counter}.md"
return numbered_filename unless File.exist?(draft_path(numbered_filename))
counter += 1
end
end
def render_draft_template(title)
now = Time.now
<<~FRONTMATTER
---
Author: #{current_author}
Title: #{title}
Date: unpublished
Timestamp: #{now.strftime("%Y-%m-%dT%H:%M:%S%:z")}
Tags:
---
# #{title}
TKTK
FRONTMATTER
end
def current_author
Etc.getlogin || ENV["USER"] || `whoami`.strip
rescue
ENV["USER"] || `whoami`.strip
end
def ordinal_date(time)
day = time.day
suffix = case day
when 1, 21, 31
"st"
when 2, 22
"nd"
when 3, 23
"rd"
else
"th"
end
time.strftime("#{day}#{suffix} %B, %Y")
end
def command_available?(command)
system("which", command, out: File::NULL, err: File::NULL)
end
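The `ordinal_date` helper above maps day numbers to English suffixes via a small `case` statement; note that 11, 12, and 13 correctly fall through to "th" because only 1/21/31, 2/22, and 3/23 are special-cased. The same rule as a standalone sketch (a port for illustration, not part of the file):

```javascript
// English ordinal suffix rule used for dates like "3rd March, 2024",
// mirroring bake.rb's ordinal_date case statement.
function ordinalSuffix(day) {
  switch (day) {
    case 1: case 21: case 31: return 'st'
    case 2: case 22: return 'nd'
    case 3: case 23: return 'rd'
    default: return 'th'
  }
}

console.log(2 + ordinalSuffix(2))   // → "2nd"
console.log(11 + ordinalSuffix(11)) // → "11th"
```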

262
bin/blog.rb Executable file
@@ -0,0 +1,262 @@
#!/usr/bin/env ruby
# encoding: utf-8
require 'time'
require 'rubygems'
require 'bundler/setup'
require 'builder'
require 'json'
require 'mustache'
require 'rdiscount'
DefaultKeywords = ['sjs', 'sami samhuri', 'sami', 'samhuri', 'samhuri.net', 'blog']
ShortURLCodeSet = '0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ'
ShortURLBase = ShortURLCodeSet.length.to_f
def main
srcdir = ARGV.shift.to_s
destdir = ARGV.shift.to_s
Dir.mkdir(destdir) unless File.exists?(destdir)
unless File.directory?(srcdir)
puts 'usage: blog.rb <source dir> <dest dir>'
exit 1
end
b = Blag.new srcdir, destdir
puts 'title: ' + b.title
puts 'subtitle: ' + b.subtitle
puts 'url: ' + b.url
puts "#{b.posts.size} posts"
b.generate!
puts 'done blog'
end
class Blag
attr_accessor :title, :subtitle, :url
def self.go! src, dest
self.new(src, dest).generate!
end
def initialize src, dest
@src = src
@dest = dest
@blog_dest = File.join(dest, 'blog')
@css_dest = File.join(dest, 'css')
read_blog
end
def generate!
generate_posts
generate_index
generate_rss
generate_posts_json
generate_archive
generate_short_urls
copy_assets
end
def generate_index
# generate landing page
index_template = File.read(File.join('templates', 'blog', 'index.html'))
post = posts.first
values = { :post => post,
:styles => post[:styles],
:article => html(post),
:previous => posts[1],
:filename => post[:filename],
:url => post[:relative_url],
:comments => post[:comments]
}
index_html = Mustache.render(index_template, values)
File.open(File.join(@blog_dest, 'index.html'), 'w') {|f| f.puts(index_html) }
end
def generate_posts
page_template = File.read(File.join('templates', 'blog', 'post.html'))
posts.each_with_index do |post, i|
values = { :title => post[:title],
:link => post[:link],
:styles => post[:styles],
:article => html(post),
:previous => i < posts.length - 1 && posts[i + 1],
:next => i > 0 && posts[i - 1],
:filename => post[:filename],
:url => post[:relative_url],
:comments => post[:comments],
:keywords => (DefaultKeywords + post[:tags]).join(',')
}
post[:html] = Mustache.render(page_template, values)
File.open(File.join(@blog_dest, post[:filename]), 'w') {|f| f.puts(post[:html]) }
end
end
def generate_posts_json
json = JSON.generate({ :published => posts.map {|p| p[:filename]} })
File.open(File.join(@blog_dest, 'posts.json'), 'w') { |f| f.puts(json) }
end
def generate_archive
archive_template = File.read(File.join('templates', 'blog', 'archive.html'))
html = Mustache.render(archive_template, :posts => posts)
File.open(File.join(@blog_dest, 'archive'), 'w') { |f| f.puts(html) }
end
def generate_rss
# posts rss
File.open(rss_file, 'w') { |f| f.puts(rss_for_posts.target!) }
end
def generate_short_urls
htaccess = ['RewriteEngine on', 'RewriteRule ^$ http://samhuri.net [R=301,L]']
posts.reverse.each_with_index do |post, i|
code = shorten(i + 1)
htaccess << "RewriteRule ^#{code}$ #{post[:url]} [R=301,L]"
end
File.open(File.join(@dest, 's42', '.htaccess'), 'w') do |f|
f.puts(htaccess)
end
end
def shorten(n)
short = ''
while n > 0
short = ShortURLCodeSet[n % ShortURLBase, 1] + short
n = (n / ShortURLBase).floor
end
short
end
def copy_assets
Dir[File.join(@src, 'css', '*.css')].each do |stylesheet|
minified = File.join(@css_dest, File.basename(stylesheet).sub('.css', '.min.css'))
`yui-compressor #{stylesheet} #{minified}`
end
Dir[File.join(@src, 'files', '*')].each do |file|
FileUtils.copy(file, File.join(@dest, 'f', File.basename(file)))
end
Dir[File.join(@src, 'images', '*')].each do |file|
FileUtils.copy(file, File.join(@dest, 'images', 'blog', File.basename(file)))
end
end
def posts
prefix = File.join(@src, 'published') + '/'
@posts ||= Dir[File.join(prefix, '*')].sort.reverse.map do |filename|
lines = File.readlines(filename)
post = { :filename => filename.sub(prefix, '').sub(/\.(html|m(ark)?d(own)?)$/i, '') }
loop do
line = lines.shift.strip
m = line.match(/^(\w+):/)
if m && param = m[1].downcase
post[param.to_sym] = line.sub(Regexp.new('^' + param + ':\s*', 'i'), '').strip
elsif line.match(/^----\s*$/)
lines.shift while lines.first.strip.empty?
break
else
puts "ignoring unknown header: #{line}"
end
end
post[:type] = post[:link] ? :link : :post
post[:title] += " →" if post[:type] == :link
post[:styles] = (post[:styles] || '').split(/\s*,\s*/)
post[:tags] = (post[:tags] || '').split(/\s*,\s*/)
post[:relative_url] = post[:filename].sub(/\.html$/, '')
post[:url] = @url + '/' + post[:relative_url]
post[:timestamp] = post[:timestamp].to_i
post[:content] = lines.join
post[:body] = RDiscount.new(post[:content], :smart).to_html
post[:rfc822] = Time.at(post[:timestamp]).rfc822
# comments on by default
post[:comments] = (post[:comments] == 'on' || post[:comments].nil?)
post
end.sort { |a, b| b[:timestamp] <=> a[:timestamp] }
end
private
def blog_file
File.join(@src, 'blog.json')
end
def read_blog
blog = JSON.parse(File.read(blog_file))
@title = blog['title']
@subtitle = blog['subtitle']
@url = blog['url']
end
def html(post)
Mustache.render(template(post[:type]), post)
end
def template(type)
if type == :post
@post_template ||= File.read(File.join('templates', 'blog', 'post.mustache'))
elsif type == :link
@link_template ||= File.read(File.join('templates', 'blog', 'link.mustache'))
else
raise "unknown post type: #{type}"
end
end
def rss_template(type)
if type == :post
@post_rss_template ||= File.read(File.join('templates', 'blog', 'post.rss.html'))
elsif type == :link
@link_rss_template ||= File.read(File.join('templates', 'blog', 'link.rss.html'))
else
raise "unknown post type: #{type}"
end
end
def rss_file
File.join(@blog_dest, 'sjs.rss')
end
def rss_html(post)
Mustache.render(rss_template(post[:type]), { :post => post })
end
def rss_for_posts(options = {})
title = options[:title] || @title
subtitle = options[:subtitle] || @subtitle
url = options[:url] || @url
rss_posts ||= options[:posts] || posts[0, 10]
xml = Builder::XmlMarkup.new
xml.instruct! :xml, :version => '1.0'
xml.instruct! 'xml-stylesheet', :href => 'http://samhuri.net/css/blog-all.min.css', :type => 'text/css'
rss_posts.each do |post|
post[:styles].each do |style|
xml.instruct! 'xml-stylesheet', :href => "http://samhuri.net/css/#{style}.min.css", :type => 'text/css'
end
end
xml.rss :version => '2.0' do
xml.channel do
xml.title title
xml.description subtitle
xml.link url
xml.pubDate posts.first[:rfc822]
rss_posts.each do |post|
xml.item do
xml.title post[:title]
xml.description rss_html(post)
xml.pubDate post[:rfc822]
xml.author post[:author]
xml.link post[:link] || post[:url]
xml.guid post[:url]
end
end
end
end
xml
end
end
main if $0 == __FILE__
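`Blag#shorten` above is a plain base-62 encoder over the post index, used to build the `s42` short-URL rewrite rules. The same scheme in a self-contained sketch (`CODE_SET` mirrors `ShortURLCodeSet`; this port is illustrative, not part of the file):

```javascript
// Base-62 encoder equivalent to Blag#shorten: post index n -> short code.
var CODE_SET = '0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ'

function shorten(n) {
  var short = ''
  while (n > 0) {
    short = CODE_SET.charAt(n % 62) + short
    n = Math.floor(n / 62)
  }
  return short
}

console.log(shorten(1))  // → "1"
console.log(shorten(62)) // → "10"
```

Because posts are numbered from 1, codes start at "1"; `shorten(0)` yields an empty string, which the caller never produces.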

@@ -1,37 +0,0 @@
#!/bin/bash
set -euo pipefail
ROOT_DIR="$(cd "$(dirname "$0")/.." && pwd)"
RUBY_VERSION="$(cat "$ROOT_DIR/.ruby-version")"
if [[ "$(uname)" = "Linux" ]]; then
echo "*** installing Linux prerequisites"
sudo apt install -y \
build-essential \
git \
inotify-tools \
libffi-dev \
libyaml-dev \
pkg-config \
zlib1g-dev
fi
cd "$ROOT_DIR"
if command -v rbenv >/dev/null 2>/dev/null; then
echo "*** using rbenv (ruby $RUBY_VERSION)"
rbenv install -s "$RUBY_VERSION"
if ! rbenv exec gem list -i bundler >/dev/null 2>/dev/null; then
rbenv exec gem install bundler
fi
rbenv exec bundle install
else
echo "*** rbenv not found, using system Ruby"
if ! gem list -i bundler >/dev/null 2>/dev/null; then
gem install bundler
fi
bundle install
fi
echo "*** done"

30
bin/combine.sh Executable file
@@ -0,0 +1,30 @@
#!/usr/bin/env zsh
### javascript ###
# blog
echo "request,showdown,strftime,tmpl,jquery-serializeObject,blog -> blog-all.min.js"
cat public/js/{request,showdown,strftime,tmpl,jquery-serializeObject,blog}.min.js >|public/js/blog-all.min.js
# project index
echo "gitter,store -> proj-index-all.min.js"
cat public/js/{gitter,store}.min.js >|public/js/proj-index-all.min.js
# projects
echo "gitter,store,proj -> proj-all.min.js"
cat public/js/{gitter,store,proj}.min.js >|public/js/proj-all.min.js
### css ###
# blog
echo "style,blog -> blog-all.min.css"
cat public/css/{style,blog}.min.css >|public/css/blog-all.min.css
# project index
echo "style,proj-common -> proj-index-all.min.css"
cat public/css/{style,proj-common}.min.css >|public/css/proj-index-all.min.css
# projects
echo "style,proj-common,proj -> proj-all.min.css"
cat public/css/{style,proj-common,proj}.min.css >|public/css/proj-all.min.css

21
bin/minify.sh Executable file
@@ -0,0 +1,21 @@
#!/usr/bin/env zsh
setopt extendedglob
[[ ! -d public/js ]] && mkdir public/js
for js (assets/js/*.js) {
target=public/js/${${js:t}%.js}.min.js
if [ ! -f $target ] || [ $js -nt $target ]; then
echo "$js -> $target"
closure < $js >| $target
fi
}
[[ ! -d public/css ]] && mkdir public/css
for css (assets/css/*.css) {
target=public/css/${${css:t}%.css}.min.css
if [ ! -f $target ] || [ $css -nt $target ]; then
echo "$css -> $target"
yui-compressor $css $target
fi
}

79
bin/projects.js Executable file
@@ -0,0 +1,79 @@
#!/usr/bin/env node
var fs = require('fs')
, path = require('path')
, mustache = require('mustache')
, rootDir = path.join(__dirname, '..')
, projectFile = path.join(rootDir, process.argv[2])
, templateDir = path.join(rootDir, 'templates', 'proj')
, targetDir = path.join(rootDir, process.argv[3])
try {
fs.mkdirSync(targetDir, 0775)
}
catch (e) {
if (e.code != 'EEXIST') throw e
}
function main() {
var ctx = {}
fs.readFile(path.join(templateDir, 'project.html'), function(err, html) {
if (err) throw err
ctx.template = html.toString()
fs.readFile(projectFile, function(err, json) {
if (err) throw err
var projects = JSON.parse(json).projects
, index = path.join(targetDir, 'index.html')
// write project index
fs.readFile(path.join(templateDir, 'index.html'), function(err, tpl) {
if (err) throw err
fs.mkdir(targetDir, 0775, function(err) {
if (err && err.code !== 'EEXIST') throw err
fs.unlink(index, function(err) {
if (err && err.code !== 'ENOENT') throw err
var vals = { projects: projects }
, html = mustache.to_html(tpl.toString(), vals)
fs.writeFile(index, html, function(err) {
if (err) throw err
console.log('* (project index)')
})
})
})
})
// write project pages
ctx.n = 0
projects.forEach(function(project) {
ctx.n += 1
buildProject(project.name, project, ctx)
})
})
})
}
function buildProject(name, project, ctx) {
var dir = path.join(targetDir, name)
, index = path.join(dir, 'index.html')
try {
fs.mkdirSync(dir, 0775)
}
catch (e) {
if (e.code != 'EEXIST') throw e
}
fs.unlink(index, function(err) {
if (err && err.code !== 'ENOENT') throw err
project.name = name
fs.writeFile(index, mustache.to_html(ctx.template, project), function(err) {
if (err) console.error('error: ', err.message)
ctx.n -= 1
console.log('* ' + name + (err ? ' (failed)' : ''))
if (ctx.n === 0) console.log('done projects')
})
})
}
if (module == require.main) main()

33
bin/publish.sh Executable file
View file

@@ -0,0 +1,33 @@
#!/bin/bash
bail() {
echo fail: $*
exit 1
}
# exit on errors
set -e
publish_host=samhuri.net
publish_dir=samhuri.net/public/
# test
if [[ "$1" = "-t" ]]; then
prefix=echo
shift
fi
# --delete, passed to rsync
if [[ "$1" = "--delete" ]]; then
delete="--delete"
shift
fi
if [[ $# -eq 0 ]]; then
if [[ "$delete" != "" ]]; then
bail "no paths given, cowardly refusing to publish everything with --delete"
fi
$prefix rsync -aKv $delete public/* "$publish_host":"${publish_dir}"
else
$prefix rsync -aKv $delete "$@" "$publish_host":"${publish_dir}"
fi

343
discussd/discussd.js Executable file
View file

@@ -0,0 +1,343 @@
#!/usr/bin/env node
var fs = require('fs')
, http = require('http')
, path = require('path')
, parseURL = require('url').parse
, keys = require('keys')
, markdown = require('markdown')
, strftime = require('strftime').strftime
, DefaultOptions = { host: 'localhost'
, port: 2020
, postsFile: path.join(__dirname, 'posts.json')
}
function main() {
var options = parseArgs(DefaultOptions)
, db = new keys.Dirty('./discuss.dirty')
, context = { db: db
, posts: null
}
, server = http.createServer(requestHandler(context))
, loadPosts = function(cb) {
readJSON(options.postsFile, function(err, posts) {
if (err) {
console.error('failed to parse posts file, is it valid JSON?')
console.dir(err)
process.exit(1)
}
if (context.posts === null) {
var n = posts.published.length
, t = strftime('%Y-%m-%d %I:%M:%S %p')
console.log('(' + t + ') ' + 'loaded discussions for ' + n + ' posts...')
}
context.posts = posts.published
if (typeof cb == 'function') cb()
})
}
, listen = function() {
console.log(path.basename(process.argv[1]) + ' listening on ' + options.host + ':' + options.port) // argv[0] is just 'node'
server.listen(options.port, options.host)
}
loadPosts(function() {
fs.watchFile(options.postsFile, loadPosts)
if (db._loaded) {
listen()
} else {
db.db.on('load', listen)
}
})
}
function readJSON(f, cb) {
fs.readFile(f, function(err, buf) {
var data
if (!err) {
try {
data = JSON.parse(buf.toString())
} catch (e) {
err = e
}
}
cb(err, data)
})
}
// returns a request handler that returns a string
function createTextHandler(options) {
if (typeof options === 'string') {
options = { body: options }
} else {
options = options || {}
}
var body = options.body || ''
, code = options.code || 200
, type = options.type || 'text/plain'
, n = Buffer.byteLength(body) // content-length is bytes, not chars
return function(req, res) {
var headers = res.headers || {}
headers['content-type'] = type
headers['content-length'] = n
// console.log('code: ', code)
// console.log('headers: ', JSON.stringify(headers, null, 2))
// console.log('body: ', body)
res.writeHead(code, headers)
res.end(body)
}
}
// Cross-Origin Resource Sharing
var createCorsHandler = (function() {
var AllowedOrigins = [ 'http://samhuri.net' ]
return function(handler) {
handler = handler || createTextHandler('ok')
return function(req, res) {
var origin = req.headers.origin
console.log('origin: ', origin)
console.log('index: ', AllowedOrigins.indexOf(origin))
if (AllowedOrigins.indexOf(origin) !== -1) {
res.headers = { 'Access-Control-Allow-Origin': origin
, 'Access-Control-Allow-Methods': 'POST, GET'
, 'Access-Control-Allow-Headers': 'content-type'
}
handler(req, res)
} else {
BadRequest(req, res)
}
}
}
}())
var DefaultHandler = createTextHandler({ code: 404, body: 'not found' })
, BadRequest = createTextHandler({ code: 400, body: 'bad request' })
, ServerError = createTextHandler({ code: 500, body: 'server error' })
, _routes = {}
function route(method, pattern, handler) {
if (typeof pattern === 'function' && !handler) {
handler = pattern
pattern = ''
}
if (!pattern || typeof pattern.exec !== 'function') {
pattern = new RegExp('^/' + pattern)
}
var route = { pattern: pattern, handler: handler }
console.log('routing ' + method, pattern)
if (!(method in _routes)) _routes[method] = []
_routes[method].push(route)
}
function resolve(method, path) {
var rs = _routes[method]
, i = rs.length
, m
, r
while (i--) {
r = rs[i]
m = r.pattern.exec ? r.pattern.exec(path) : path.match(r.pattern)
if (m) return r.handler
}
console.warn('*** using default handler, this is probably not what you want')
return DefaultHandler
}
function get(pattern, handler) {
route('GET', pattern, handler)
}
function post(pattern, handler) {
route('POST', pattern, handler)
}
function options(pattern, handler) {
route('OPTIONS', pattern, handler)
}
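The tiny router above resolves routes last-registered-first: resolve() scans the route list from the end, so later registrations shadow earlier ones. The same dispatch idea, sketched in Ruby purely for illustration (not part of this repo; the symbols stand in for handler functions):

```ruby
# Last-registered-wins route resolution, mirroring resolve() above:
# the list is scanned from the end, so later routes shadow earlier ones.
routes = []
add = ->(pattern, handler) { routes << [pattern, handler] }
resolve = lambda do |path|
  routes.reverse_each { |pattern, handler| return handler if pattern.match?(path) }
  :default # stands in for DefaultHandler
end
add.call(%r{^/comments/}, :comments)
add.call(%r{^/count/}, :count)
```

A path matching no pattern falls through to the default handler, just as handleRequest falls back to DefaultHandler.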
function handleRequest(req, res) {
var handler = resolve(req.method, req.url)
try {
handler(req, res)
} catch (e) {
console.error('!!! error handling ' + req.method, req.url)
console.dir(e)
}
}
function commentServer(context) {
return { get: getComments
, count: countComments
, post: postComment
}
function addComment(post, name, email, url, body, timestamp) {
var comments = context.db.get(post) || []
comments.push({ id: comments.length + 1
, name: name
, email: email
, url: url
, body: body
, timestamp: timestamp || Date.now()
})
context.db.set(post, comments)
console.log('[' + timestamp + '] comment on ' + post)
console.log('name:', name)
console.log('email:', email)
console.log('url:', url)
console.log('body:', body)
}
function getComments(req, res) {
var post = parseURL(req.url).pathname.replace(/^\/comments\//, '')
, comments
if (context.posts.indexOf(post) === -1) {
console.warn('post not found: ' + post)
BadRequest(req, res)
return
}
comments = context.db.get(post) || []
comments.forEach(function(c, i) {
c.id = c.id || (i + 1)
})
res.respond({comments: comments.map(function(c) {
// Build a copy instead of mutating the stored comment, so the db
// record keeps its email address.
var out = { id: c.id, name: c.name, url: c.url, body: c.body, timestamp: c.timestamp }
out.html = markdown.parse(c.body)
// FIXME discount has a race condition, sometimes gives a string
// with trailing garbage.
while (out.html.length > 0 && out.html.charAt(out.html.length - 1) !== '>') {
console.log("!!! removing trailing garbage from discount's html")
out.html = out.html.slice(0, out.html.length - 1)
}
return out
})})
}
function postComment(req, res) {
var body = ''
req.on('data', function(chunk) { body += chunk })
req.on('end', function() {
var data, post, name, email, url, timestamp
try {
data = JSON.parse(body)
} catch (e) {
console.log('not json -> ' + body)
BadRequest(req, res)
return
}
post = (data.post || '').trim()
name = (data.name || 'anonymous').trim()
email = (data.email || '').trim()
url = (data.url || '').trim()
if (url && !url.match(/^https?:\/\//)) url = 'http://' + url
body = data.body || ''
if (!post || !body || context.posts.indexOf(post) === -1) {
console.warn('missing post, body, or post not found: ' + post)
console.warn('body: ', body)
BadRequest(req, res)
return
}
timestamp = +data.timestamp || Date.now()
addComment(post, name, email, url, body, timestamp)
res.respond()
})
}
function countComments(req, res) {
var post = parseURL(req.url).pathname.replace(/^\/count\//, '')
, comments
if (context.posts.indexOf(post) === -1) {
console.warn('post not found: ' + post)
BadRequest(req, res)
return
}
comments = context.db.get(post) || []
res.respond({count: comments.length})
}
}
function requestHandler(context) {
var comments = commentServer(context)
get(/comments\//, createCorsHandler(comments.get))
get(/count\//, createCorsHandler(comments.count))
post(/comment\/?/, createCorsHandler(comments.post))
options(createCorsHandler())
return function(req, res) {
console.log(req.method + ' ' + req.url)
res.respond = function(obj) {
var s = ''
var headers = res.headers || {}
if (obj) {
try {
s = JSON.stringify(obj)
} catch (e) {
ServerError(req, res)
return
}
headers['content-type'] = 'application/json'
}
headers['content-length'] = Buffer.byteLength(s)
/*
console.log('code: ', s ? 200 : 204)
console.log('headers:', headers)
console.log('body:', s)
*/
res.writeHead(s ? 200 : 204, headers)
res.end(s)
}
handleRequest(req, res)
}
}
function parseArgs(defaults) {
var expectingArg
, options = Object.keys(defaults).reduce(function(os, k) {
os[k] = defaults[k]
return os
}, {})
process.argv.slice(2).forEach(function(arg) {
if (expectingArg) {
options[expectingArg] = arg
expectingArg = null
} else {
// remove leading dashes
while (arg.charAt(0) === '-') {
arg = arg.slice(1)
}
switch (arg) {
case 'h':
case 'host':
expectingArg = 'host'
break
case 'p':
case 'port':
expectingArg = 'port'
break
default:
console.warn('unknown option: ' + arg + ' (setting anyway)')
expectingArg = arg
}
}
})
return options
}
var missingParams = (function() {
var requiredParams = 'name email body'.split(' ')
return function(d) {
var anyMissing = false
requiredParams.forEach(function(p) {
var v = (d[p] || '').trim()
if (!v) anyMissing = true
})
return anyMissing
}
}())
if (module == require.main) main()

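For reference, the JSON body that discussd's POST /comment handler reads, as inferred from postComment above. This is a sketch in Ruby with entirely hypothetical values; the "post" field must match a published post slug or the server answers 400:

```ruby
require "json"
require "time"

# Shape of the payload postComment() parses; every value here is
# hypothetical. "timestamp" is milliseconds, matching Date.now().
payload = {
  "post" => "2012/01/some-post",  # hypothetical slug; must be published
  "name" => "anonymous",
  "email" => "",
  "url" => "example.com",         # http:// is prepended server-side
  "body" => "Nice post!",
  "timestamp" => (Time.now.to_f * 1000).to_i
}
body = JSON.generate(payload)
```

Missing name falls back to "anonymous" and a missing timestamp falls back to the server's clock, so only "post" and "body" are strictly required.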
27
discussd/package.json Normal file
View file

@@ -0,0 +1,27 @@
{ "name" : "discussd"
, "description" : "comment server"
, "version" : "1.0.0"
, "homepage" : "http://samhuri.net/proj/samhuri.net"
, "author" : "Sami Samhuri <sami@samhuri.net>"
, "repository" :
{ "type" : "git"
, "url" : "https://github.com/samsonjs/samhuri.net.git"
}
, "bugs" :
{ "mail" : "sami@samhuri.net"
, "url" : "https://github.com/samsonjs/samhuri.net/issues"
}
, "dependencies" :
{ "dirty" : "0.9.x"
, "keys" : "0.1.x"
, "markdown" : "0.5.x"
, "strftime" : "0.6.x"
}
, "bin" : { "discussd" : "./discussd.js" }
, "engines" : { "node" : ">=0.6.0" }
, "licenses" :
[ { "type" : "MIT"
, "url" : "http://github.com/samsonjs/samhuri.net/raw/master/LICENSE"
}
]
}

47
f/fiveshift.css Normal file
View file

@@ -0,0 +1,47 @@
/* Fluid widths */
body
{ width: 100%
; min-width: 0
; font-size: 80%
}
#masthead
{ width: 100% }
#masthead .grid_24
{ text-align: center }
#header .container_24,
#footer .container_24,
/* The missing comma after #footer and the trailing comma after .sidebar
made this whole rule invalid, which is why these selectors "didn't work". */
#masthead .container_24,
#content .container_24,
#content .container_24 .grid_15,
.sidebar
{ width: 97% }
#masthead .grid_24 { width: 97% }
#masthead .grid_24 .grid_7 { width: 100%; margin-bottom: 1em }
#masthead .grid_24 .grid_11 { width: 95% }
#masthead .hosts { width: 100%; padding-right: 10px }
#masthead .hosts .host
{ width: 44%
; display: inline-block
; float: none
; clear: left
}
#episode { min-height: 0 }
#episode h2 { font-size: 1.4em }
h5, #episode h5 { font-size: 0.8em; line-height: 1.2em }
#episode p,
#episode #sponsors
{ font-size: 0.7em; line-height: 1.3em }
#episode #episode_links { font-size: 0.7em; line-height: 1.2em }
.player { width: 100% }
.player .transport { width: 65% }

26
f/fiveshift.js Normal file
View file

@@ -0,0 +1,26 @@
if (!window.__fiveShiftInjected__) {
window.__fiveShiftInjected__ = true
$(function() {
// load custom css
var head = document.getElementsByTagName('head')[0]
, css = document.createElement('link')
css.rel = 'stylesheet'
css.type = 'text/css'
css.href = 'http://samhuri.net/f/fiveshift.css?t=' + +new Date()
head.appendChild(css)
// These don't center properly via CSS for some reason
;[ '#masthead .container_24'
, '#content .container_24'
, '#content .container_24 .grid_15'
, '.sidebar'
].forEach(function(selector) {
$(selector).css('width', '97%')
})
// Fix up the viewport
$('meta[name="viewport"]').attr('content','width=device-width,initial-scale=1.0')
})
}

BIN
f/frank_abagnale_jr.pdf Normal file

Binary file not shown.

1
f/hi.js Normal file
View file

@@ -0,0 +1 @@
alert('hi')

Binary file not shown.

View file

@@ -1,15 +0,0 @@
require "pressa/site"
require "pressa/site_generator"
require "pressa/plugin"
require "pressa/posts/plugin"
require "pressa/projects/plugin"
require "pressa/utils/markdown_renderer"
require "pressa/utils/gemini_markdown_renderer"
require "pressa/config/loader"
module Pressa
def self.create_site(source_path: ".", url_override: nil, output_format: "html")
loader = Config::Loader.new(source_path:)
loader.build_site(url_override:, output_format:)
end
end

View file

@@ -1,440 +0,0 @@
require "pressa/site"
require "pressa/posts/plugin"
require "pressa/projects/plugin"
require "pressa/utils/markdown_renderer"
require "pressa/utils/gemini_markdown_renderer"
require "pressa/config/simple_toml"
module Pressa
module Config
class ValidationError < StandardError; end
class Loader
REQUIRED_SITE_KEYS = %w[author email title description url].freeze
REQUIRED_PROJECT_KEYS = %w[name title description url].freeze
def initialize(source_path:)
@source_path = source_path
end
def build_site(url_override: nil, output_format: "html")
site_config = load_toml("site.toml")
validate_required!(site_config, REQUIRED_SITE_KEYS, context: "site.toml")
validate_no_legacy_output_keys!(site_config)
normalized_output_format = normalize_output_format(output_format)
site_url = url_override || site_config["url"]
output_options = build_output_options(site_config:, output_format: normalized_output_format)
plugins = build_plugins(site_config, output_format: normalized_output_format)
Site.new(
author: site_config["author"],
email: site_config["email"],
title: site_config["title"],
description: site_config["description"],
url: site_url,
fediverse_creator: build_optional_string(
site_config["fediverse_creator"],
context: "site.toml fediverse_creator"
),
image_url: normalize_image_url(site_config["image_url"], site_url),
scripts: build_scripts(site_config["scripts"], context: "site.toml scripts"),
styles: build_styles(site_config["styles"], context: "site.toml styles"),
plugins:,
renderers: build_renderers(output_format: normalized_output_format),
output_format: normalized_output_format,
output_options:
)
end
private
def load_toml(filename)
path = File.join(@source_path, filename)
SimpleToml.load_file(path)
rescue ParseError => e
raise ValidationError, e.message
end
def build_projects(projects_config)
projects = projects_config["projects"]
raise ValidationError, "Missing required top-level array 'projects' in projects.toml" unless projects
raise ValidationError, "Expected 'projects' to be an array in projects.toml" unless projects.is_a?(Array)
projects.map.with_index do |project, index|
unless project.is_a?(Hash)
raise ValidationError, "Project entry #{index + 1} must be a table in projects.toml"
end
validate_required!(project, REQUIRED_PROJECT_KEYS, context: "projects.toml project ##{index + 1}")
Projects::Project.new(
name: project["name"],
title: project["title"],
description: project["description"],
url: project["url"]
)
end
end
def validate_required!(hash, keys, context:)
missing = keys.reject do |key|
hash[key].is_a?(String) && !hash[key].strip.empty?
end
return if missing.empty?
raise ValidationError, "Missing required #{context} keys: #{missing.join(", ")}"
end
def validate_no_legacy_output_keys!(site_config)
if site_config.key?("output")
raise ValidationError, "Legacy key 'output' is no longer supported; use 'outputs'"
end
if site_config.key?("mastodon_url") || site_config.key?("github_url")
raise ValidationError, "Legacy keys 'mastodon_url'/'github_url' are no longer supported; use outputs.html.remote_links or outputs.gemini.home_links"
end
end
def build_plugins(site_config, output_format:)
plugin_names = parse_plugin_names(site_config["plugins"])
plugin_names.map.with_index do |plugin_name, index|
case plugin_name
when "posts"
posts_plugin_for(output_format)
when "projects"
build_projects_plugin(site_config, output_format:)
else
raise ValidationError, "Unknown plugin '#{plugin_name}' at site.toml plugins[#{index}]"
end
end
end
def build_renderers(output_format:)
case output_format
when "html"
[Utils::MarkdownRenderer.new]
when "gemini"
[Utils::GeminiMarkdownRenderer.new]
else
raise ValidationError, "Unsupported output format '#{output_format}'"
end
end
def posts_plugin_for(output_format)
case output_format
when "html"
Posts::HTMLPlugin.new
when "gemini"
Posts::GeminiPlugin.new
else
raise ValidationError, "Unsupported output format '#{output_format}'"
end
end
def parse_plugin_names(value)
return [] if value.nil?
raise ValidationError, "Expected site.toml plugins to be an array" unless value.is_a?(Array)
seen = {}
value.map.with_index do |plugin_name, index|
unless plugin_name.is_a?(String) && !plugin_name.strip.empty?
raise ValidationError, "Expected site.toml plugins[#{index}] to be a non-empty String"
end
normalized_plugin_name = plugin_name.strip
if seen[normalized_plugin_name]
raise ValidationError, "Duplicate plugin '#{normalized_plugin_name}' in site.toml plugins"
end
seen[normalized_plugin_name] = true
normalized_plugin_name
end
end
def build_projects_plugin(site_config, output_format:)
projects_plugin = hash_or_empty(site_config["projects_plugin"], "site.toml projects_plugin")
projects_config = load_toml("projects.toml")
projects = build_projects(projects_config)
case output_format
when "html"
Projects::HTMLPlugin.new(
projects:,
scripts: build_scripts(projects_plugin["scripts"], context: "site.toml projects_plugin.scripts"),
styles: build_styles(projects_plugin["styles"], context: "site.toml projects_plugin.styles")
)
when "gemini"
Projects::GeminiPlugin.new(projects:)
else
raise ValidationError, "Unsupported output format '#{output_format}'"
end
end
def hash_or_empty(value, context)
return {} if value.nil?
return value if value.is_a?(Hash)
raise ValidationError, "Expected #{context} to be a table"
end
def build_output_options(site_config:, output_format:)
outputs_config = hash_or_empty(site_config["outputs"], "site.toml outputs")
validate_allowed_keys!(
outputs_config,
allowed_keys: %w[html gemini],
context: "site.toml outputs"
)
format_config = hash_or_empty(outputs_config[output_format], "site.toml outputs.#{output_format}")
case output_format
when "html"
build_html_output_options(format_config:)
when "gemini"
build_gemini_output_options(format_config:)
else
raise ValidationError, "Unsupported output format '#{output_format}'"
end
end
def build_html_output_options(format_config:)
validate_allowed_keys!(
format_config,
allowed_keys: %w[exclude_public remote_links],
context: "site.toml outputs.html"
)
public_excludes = build_public_excludes(
format_config["exclude_public"],
context: "site.toml outputs.html.exclude_public"
)
remote_links = build_output_links(
format_config["remote_links"],
context: "site.toml outputs.html.remote_links",
allow_icon: true,
allow_label_optional: false,
allow_string_entries: false
)
HTMLOutputOptions.new(
public_excludes:,
remote_links:
)
end
def build_gemini_output_options(format_config:)
validate_allowed_keys!(
format_config,
allowed_keys: %w[exclude_public recent_posts_limit home_links],
context: "site.toml outputs.gemini"
)
public_excludes = build_public_excludes(
format_config["exclude_public"],
context: "site.toml outputs.gemini.exclude_public"
)
home_links = build_output_links(
format_config["home_links"],
context: "site.toml outputs.gemini.home_links",
allow_icon: false,
allow_label_optional: true,
allow_string_entries: true
)
recent_posts_limit = build_recent_posts_limit(
format_config["recent_posts_limit"],
context: "site.toml outputs.gemini.recent_posts_limit"
)
GeminiOutputOptions.new(
public_excludes:,
recent_posts_limit:,
home_links:
)
end
def build_scripts(value, context:)
entries = array_or_empty(value, context)
entries.map.with_index do |item, index|
case item
when String
validate_asset_path!(
item,
context: "#{context}[#{index}]"
)
Script.new(src: item, defer: true)
when Hash
src = item["src"]
raise ValidationError, "Expected #{context}[#{index}].src to be a String" unless src.is_a?(String) && !src.empty?
validate_asset_path!(
src,
context: "#{context}[#{index}].src"
)
defer = item.key?("defer") ? item["defer"] : true
unless [true, false].include?(defer)
raise ValidationError, "Expected #{context}[#{index}].defer to be a Boolean"
end
Script.new(src:, defer:)
else
raise ValidationError, "Expected #{context}[#{index}] to be a String or table"
end
end
end
def build_styles(value, context:)
entries = array_or_empty(value, context)
entries.map.with_index do |item, index|
case item
when String
validate_asset_path!(
item,
context: "#{context}[#{index}]"
)
Stylesheet.new(href: item)
when Hash
href = item["href"]
raise ValidationError, "Expected #{context}[#{index}].href to be a String" unless href.is_a?(String) && !href.empty?
validate_asset_path!(
href,
context: "#{context}[#{index}].href"
)
Stylesheet.new(href:)
else
raise ValidationError, "Expected #{context}[#{index}] to be a String or table"
end
end
end
def array_or_empty(value, context)
return [] if value.nil?
return value if value.is_a?(Array)
raise ValidationError, "Expected #{context} to be an array"
end
def normalize_image_url(value, site_url)
return nil if value.nil?
return value if value.start_with?("http://", "https://")
normalized = value.start_with?("/") ? value : "/#{value}"
"#{site_url}#{normalized}"
end
def validate_asset_path!(value, context:)
return if value.start_with?("/", "http://", "https://")
raise ValidationError, "Expected #{context} to start with / or use http(s) scheme"
end
def build_public_excludes(value, context:)
entries = array_or_empty(value, context)
entries.map.with_index do |entry, index|
unless entry.is_a?(String) && !entry.strip.empty?
raise ValidationError, "Expected #{context}[#{index}] to be a non-empty String"
end
entry.strip
end
end
def build_output_links(value, context:, allow_icon:, allow_label_optional:, allow_string_entries:)
entries = array_or_empty(value, context)
entries.map.with_index do |entry, index|
if allow_string_entries && entry.is_a?(String)
href = entry
if href.strip.empty?
raise ValidationError, "Expected #{context}[#{index}] to be a non-empty String"
end
validate_link_href!(href.strip, context: "#{context}[#{index}]")
next OutputLink.new(label: nil, href: href.strip, icon: nil)
end
unless entry.is_a?(Hash)
raise ValidationError, "Expected #{context}[#{index}] to be a String or table"
end
allowed_keys = allow_icon ? %w[label href icon] : %w[label href]
validate_allowed_keys!(
entry,
allowed_keys:,
context: "#{context}[#{index}]"
)
href = entry["href"]
unless href.is_a?(String) && !href.strip.empty?
raise ValidationError, "Expected #{context}[#{index}].href to be a non-empty String"
end
validate_link_href!(href.strip, context: "#{context}[#{index}].href")
label = entry["label"]
if label.nil?
unless allow_label_optional
raise ValidationError, "Expected #{context}[#{index}].label to be a non-empty String"
end
else
unless label.is_a?(String) && !label.strip.empty?
raise ValidationError, "Expected #{context}[#{index}].label to be a non-empty String"
end
end
icon = entry["icon"]
unless allow_icon
if entry.key?("icon")
raise ValidationError, "Unexpected #{context}[#{index}].icon; icons are only supported for outputs.html.remote_links"
end
icon = nil
end
if allow_icon && !icon.nil? && (!icon.is_a?(String) || icon.strip.empty?)
raise ValidationError, "Expected #{context}[#{index}].icon to be a non-empty String"
end
OutputLink.new(label: label&.strip, href: href.strip, icon: icon&.strip)
end
end
def validate_link_href!(value, context:)
return if value.start_with?("/")
return if value.match?(/\A[a-z][a-z0-9+\-.]*:/i)
raise ValidationError, "Expected #{context} to start with / or include a URI scheme"
end
def build_recent_posts_limit(value, context:)
return 20 if value.nil?
return value if value.is_a?(Integer) && value.positive?
raise ValidationError, "Expected #{context} to be a positive Integer"
end
def normalize_output_format(output_format)
value = output_format.to_s.strip.downcase
return value if %w[html gemini].include?(value)
raise ValidationError, "Unsupported output format '#{output_format}'"
end
def build_optional_string(value, context:)
return nil if value.nil?
return value if value.is_a?(String) && !value.strip.empty?
raise ValidationError, "Expected #{context} to be a non-empty String"
end
def validate_allowed_keys!(hash, allowed_keys:, context:)
unknown = hash.keys - allowed_keys
return if unknown.empty?
raise ValidationError, "Unknown key(s) in #{context}: #{unknown.join(", ")}"
end
end
end
end

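For reference, a minimal site.toml that satisfies REQUIRED_SITE_KEYS in the loader above. All values are hypothetical; note that SimpleToml delegates value parsing to JSON, so strings must be double-quoted and arrays use JSON syntax:

```toml
author = "Jane Doe"
email = "jane@example.com"
title = "Example Site"
description = "A hypothetical site"
url = "https://example.com"
plugins = ["posts"]
```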
View file

@@ -1,224 +0,0 @@
require "json"
module Pressa
module Config
class ParseError < StandardError; end
class SimpleToml
def self.load_file(path)
new.parse(File.read(path))
rescue Errno::ENOENT
raise ParseError, "Config file not found: #{path}"
end
def parse(content)
root = {}
current_table = root
lines = content.each_line.to_a
line_index = 0
while line_index < lines.length
line = lines[line_index]
line_number = line_index + 1
source = strip_comments(line).strip
if source.empty?
line_index += 1
next
end
if source =~ /\A\[\[(.+)\]\]\z/
current_table = start_array_table(root, Regexp.last_match(1), line_number)
line_index += 1
next
end
if source =~ /\A\[(.+)\]\z/
current_table = start_table(root, Regexp.last_match(1), line_number)
line_index += 1
next
end
key, raw_value = parse_assignment(source, line_number)
while needs_continuation?(raw_value)
line_index += 1
raise ParseError, "Unterminated value for key '#{key}' at line #{line_number}" if line_index >= lines.length
continuation = strip_comments(lines[line_index]).strip
next if continuation.empty?
raw_value = "#{raw_value} #{continuation}"
end
if current_table.key?(key)
raise ParseError, "Duplicate key '#{key}' at line #{line_number}"
end
current_table[key] = parse_value(raw_value, line_number)
line_index += 1
end
root
end
private
def start_array_table(root, raw_path, line_number)
keys = parse_path(raw_path, line_number)
parent = ensure_path(root, keys[0..-2], line_number)
table_name = keys.last
parent[table_name] ||= []
array = parent[table_name]
unless array.is_a?(Array)
raise ParseError, "Expected array for '[[#{raw_path}]]' at line #{line_number}"
end
table = {}
array << table
table
end
def start_table(root, raw_path, line_number)
keys = parse_path(raw_path, line_number)
ensure_path(root, keys, line_number)
end
def ensure_path(root, keys, line_number)
cursor = root
keys.each do |key|
cursor[key] ||= {}
unless cursor[key].is_a?(Hash)
raise ParseError, "Expected table path '#{keys.join(".")}' at line #{line_number}"
end
cursor = cursor[key]
end
cursor
end
def parse_path(raw_path, line_number)
keys = raw_path.split(".").map(&:strip)
if keys.empty? || keys.any? { |part| part.empty? || part !~ /\A[A-Za-z0-9_]+\z/ }
raise ParseError, "Invalid table path '#{raw_path}' at line #{line_number}"
end
keys
end
def parse_assignment(source, line_number)
separator = index_of_unquoted(source, "=")
raise ParseError, "Invalid assignment at line #{line_number}" unless separator
key = source[0...separator].strip
value = source[(separator + 1)..].strip
if key.empty? || key !~ /\A[A-Za-z0-9_]+\z/
raise ParseError, "Invalid key '#{key}' at line #{line_number}"
end
raise ParseError, "Missing value for key '#{key}' at line #{line_number}" if value.empty?
[key, value]
end
def parse_value(raw_value, line_number)
JSON.parse(raw_value)
rescue JSON::ParserError
raise ParseError, "Unsupported TOML value '#{raw_value}' at line #{line_number}"
end
def needs_continuation?(source)
in_string = false
escaped = false
depth = 0
source.each_char do |char|
if in_string
if escaped
escaped = false
elsif char == "\\"
escaped = true
elsif char == '"'
in_string = false
end
next
end
case char
when '"'
in_string = true
when "[", "{"
depth += 1
when "]", "}"
depth -= 1
end
end
in_string || depth.positive?
end
def strip_comments(line)
output = +""
in_string = false
escaped = false
line.each_char do |char|
if in_string
output << char
if escaped
escaped = false
elsif char == "\\"
escaped = true
elsif char == '"'
in_string = false
end
next
end
case char
when '"'
in_string = true
output << char
when "#"
break
else
output << char
end
end
output
end
def index_of_unquoted(source, target)
in_string = false
escaped = false
source.each_char.with_index do |char, index|
if in_string
if escaped
escaped = false
elsif char == "\\"
escaped = true
elsif char == '"'
in_string = false
end
next
end
if char == '"'
in_string = true
next
end
return index if char == target
end
nil
end
end
end
end

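SimpleToml leans on JSON for its value grammar: every right-hand side is handed to JSON.parse, so only JSON-compatible scalars, arrays, and inline tables are accepted. A self-contained sketch of that trick, with a hypothetical input line, splitting naively on "=" (the real parse_assignment uses index_of_unquoted to skip quoted equals signs):

```ruby
require "json"

# Split an assignment roughly the way SimpleToml does, then delegate
# the value grammar entirely to JSON.parse. A value that JSON cannot
# parse (e.g. a bare date) raises JSON::ParserError, which the real
# parser converts into a ParseError.
line = 'plugins = ["posts", "projects"]'
key, raw = line.split("=", 2).map(&:strip)
value = JSON.parse(raw)
```

This keeps the parser tiny at the cost of rejecting TOML values (dates, single-quoted strings) that have no JSON equivalent.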
View file

@@ -1,11 +0,0 @@
module Pressa
class Plugin
def setup(site:, source_path:)
raise NotImplementedError, "#{self.class}#setup must be implemented"
end
def render(site:, target_path:)
raise NotImplementedError, "#{self.class}#render must be implemented"
end
end
end

View file

@@ -1,111 +0,0 @@
require "pressa/utils/file_writer"
require "pressa/utils/gemtext_renderer"
module Pressa
module Posts
class GeminiWriter
RECENT_POSTS_LIMIT = 20
def initialize(site:, posts_by_year:)
@site = site
@posts_by_year = posts_by_year
end
def write_posts(target_path:)
@posts_by_year.all_posts.each do |post|
write_post(post:, target_path:)
end
end
def write_recent_posts(target_path:, limit: RECENT_POSTS_LIMIT)
rows = ["# #{@site.title}", ""]
home_links.each do |link|
label = link.label&.strip
rows << if label.nil? || label.empty?
"=> #{link.href}"
else
"=> #{link.href} #{label}"
end
end
rows << "" unless home_links.empty?
rows << "## Recent posts"
rows << ""
@posts_by_year.recent_posts(limit).each do |post|
rows << post_link_line(post)
end
rows << ""
rows << "=> #{web_url_for("/")} Website"
rows << ""
file_path = File.join(target_path, "index.gmi")
Utils::FileWriter.write(path: file_path, content: rows.join("\n"))
end
def write_posts_index(target_path:)
rows = ["# #{@site.title} posts", "## Feed", ""]
@posts_by_year.all_posts.each do |post|
rows.concat(post_listing_lines(post))
end
rows << ""
rows << "=> / Home"
rows << "=> #{web_url_for("/posts/")} Read on the web"
rows << ""
content = rows.join("\n")
Utils::FileWriter.write(path: File.join(target_path, "posts", "index.gmi"), content:)
Utils::FileWriter.write(path: File.join(target_path, "posts", "feed.gmi"), content:)
end
private
def write_post(post:, target_path:)
rows = ["# #{post.title}", "", "#{post.formatted_date} by #{post.author}", ""]
if post.link_post?
rows << "=> #{post.link}"
rows << ""
end
gemtext_body = Utils::GemtextRenderer.render(post.markdown_body)
rows << gemtext_body unless gemtext_body.empty?
rows << "" unless rows.last.to_s.empty?
rows << "=> /posts Back to posts"
rows << "=> #{web_url_for("#{post.path}/")} Read on the web" if include_web_link?(post)
rows << ""
file_path = File.join(target_path, post.path.sub(%r{^/}, ""), "index.gmi")
Utils::FileWriter.write(path: file_path, content: rows.join("\n"))
end
def post_link_line(post)
"=> #{post.path}/ #{post.date.strftime("%Y-%m-%d")} - #{post.title}"
end
def post_listing_lines(post)
rows = [post_link_line(post)]
rows << "=> #{post.link}" if post.link_post?
rows
end
def include_web_link?(post)
markdown_without_fences = post.markdown_body.gsub(/```.*?```/m, "")
markdown_without_fences.match?(
%r{<\s*(?:a|p|div|span|ul|ol|li|audio|video|source|img|h[1-6]|blockquote|pre|code|table|tr|td|th|em|strong|br)\b}i
)
end
def web_url_for(path)
@site.url_for(path)
end
def home_links
@site.gemini_output_options&.home_links || []
end
end
end
end

View file

@ -1,76 +0,0 @@
require "json"
require "pressa/utils/file_writer"
require "pressa/views/feed_post_view"
module Pressa
module Posts
class JSONFeedWriter
FEED_VERSION = "https://jsonfeed.org/version/1.1"
def initialize(site:, posts_by_year:)
@site = site
@posts_by_year = posts_by_year
end
def write_feed(target_path:, limit: 30)
recent = @posts_by_year.recent_posts(limit)
feed = build_feed(recent)
json = JSON.pretty_generate(feed)
file_path = File.join(target_path, "feed.json")
Utils::FileWriter.write(path: file_path, content: json)
end
private
def build_feed(posts)
author = {
name: @site.author,
url: @site.url,
avatar: @site.image_url
}
items = posts.map { |post| feed_item(post) }
{
icon: icon_url,
favicon: favicon_url,
items: items,
home_page_url: @site.url,
author:,
version: FEED_VERSION,
authors: [author],
feed_url: @site.url_for("/feed.json"),
language: "en-CA",
title: @site.title
}
end
def icon_url
@site.url_for("/images/apple-touch-icon-300.png")
end
def favicon_url
@site.url_for("/images/apple-touch-icon-80.png")
end
def feed_item(post)
content_html = Views::FeedPostView.new(post:, site: @site).call
permalink = @site.url_for(post.path)
item = {}
item[:url] = permalink
item[:external_url] = post.link if post.link_post?
item[:tags] = post.tags unless post.tags.empty?
item[:content_html] = content_html
item[:title] = post.title
item[:author] = {name: post.author}
item[:date_published] = post.date.iso8601
item[:id] = permalink
item
end
end
end
end
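The feed hash built above follows JSON Feed 1.1. A minimal standalone skeleton with placeholder site values (not the real configuration) shows the shape `JSONFeedWriter` emits and round-trips it through `JSON`:

```ruby
require "json"

# Minimal JSON Feed 1.1 document mirroring the fields JSONFeedWriter emits.
# All values here are placeholders for illustration.
feed = {
  version: "https://jsonfeed.org/version/1.1",
  title: "Example Site",
  home_page_url: "https://example.com",
  feed_url: "https://example.com/feed.json",
  items: [
    {
      id: "https://example.com/posts/2024/01/hello",
      url: "https://example.com/posts/2024/01/hello",
      content_html: "<p>Hello</p>",
      date_published: "2024-01-01T09:00:00-05:00"
    }
  ]
}

json = JSON.pretty_generate(feed)
```

Note that the writer uses the permalink as both `id` and `url`, so an item's identity is stable as long as its path is.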

View file

@ -1,50 +0,0 @@
require "yaml"
require "date"
module Pressa
module Posts
class PostMetadata
REQUIRED_FIELDS = %w[Title Author Date Timestamp].freeze
attr_reader :title, :author, :date, :formatted_date, :link, :tags
def initialize(yaml_hash)
@raw = yaml_hash
validate_required_fields!
parse_fields
end
def self.parse(content)
if content =~ /\A---\s*\n(.*?)\n---\s*\n/m
yaml_content = $1
yaml_hash = YAML.safe_load(yaml_content, permitted_classes: [Date, Time])
new(yaml_hash)
else
raise "No YAML front-matter found in post"
end
end
private
def validate_required_fields!
missing = REQUIRED_FIELDS.reject { |field| @raw.key?(field) }
raise "Missing required fields: #{missing.join(", ")}" unless missing.empty?
end
def parse_fields
@title = @raw["Title"]
@author = @raw["Author"]
timestamp = @raw["Timestamp"]
@date = timestamp.is_a?(String) ? DateTime.parse(timestamp) : timestamp.to_datetime
@formatted_date = @raw["Date"]
@link = @raw["Link"]
@tags = parse_tags(@raw["Tags"])
end
def parse_tags(value)
return [] if value.nil?
value.is_a?(Array) ? value : value.split(",").map(&:strip)
end
end
end
end
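The parsing rules above can be exercised in isolation. This sketch reuses the same front-matter delimiter regex and `YAML.safe_load` call with a hypothetical post body; quoting the timestamp keeps it a String, exercising the `DateTime.parse` branch:

```ruby
require "yaml"
require "date"

# Same delimiter convention as PostMetadata.parse: YAML between --- fences.
FRONT_MATTER = /\A---\s*\n(.*?)\n---\s*\n/m

def parse_front_matter(content)
  raise "No YAML front-matter found in post" unless content =~ FRONT_MATTER
  YAML.safe_load(Regexp.last_match(1), permitted_classes: [Date, Time])
end

post = <<~MD
  ---
  Title: Hello
  Author: Jane
  Date: January 1, 2024
  Timestamp: "2024-01-01T09:00:00-05:00"
  ---
  Body text.
MD

meta = parse_front_matter(post)
date = DateTime.parse(meta["Timestamp"])
```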

View file

@ -1,96 +0,0 @@
require "dry-struct"
require "pressa/site"
module Pressa
module Posts
class Post < Dry::Struct
attribute :slug, Types::String
attribute :title, Types::String
attribute :author, Types::String
attribute :date, Types::Params::DateTime
attribute :formatted_date, Types::String
attribute :link, Types::String.optional.default(nil)
attribute :tags, Types::Array.of(Types::String).default([].freeze)
attribute :body, Types::String
attribute :markdown_body, Types::String.default("".freeze)
attribute :excerpt, Types::String
attribute :path, Types::String
def link_post?
!link.nil?
end
def year
date.year
end
def month
date.month
end
def formatted_month
date.strftime("%B")
end
def padded_month
format("%02d", month)
end
end
class Month < Dry::Struct
attribute :name, Types::String
attribute :number, Types::Integer
attribute :padded, Types::String
def self.from_date(date)
new(
name: date.strftime("%B"),
number: date.month,
padded: format("%02d", date.month)
)
end
end
class MonthPosts < Dry::Struct
attribute :month, Month
attribute :posts, Types::Array.of(Post)
def sorted_posts
posts.sort_by(&:date).reverse
end
end
class YearPosts < Dry::Struct
attribute :year, Types::Integer
attribute :by_month, Types::Hash.map(Types::Integer, MonthPosts)
def sorted_months
by_month.keys.sort.reverse.map { |month_num| by_month[month_num] }
end
def all_posts
by_month.values.flat_map(&:posts).sort_by(&:date).reverse
end
end
class PostsByYear < Dry::Struct
attribute :by_year, Types::Hash.map(Types::Integer, YearPosts)
def sorted_years
by_year.keys.sort.reverse
end
def all_posts
by_year.values.flat_map(&:all_posts).sort_by(&:date).reverse
end
def recent_posts(limit = 10)
all_posts.take(limit)
end
def earliest_year
by_year.keys.min
end
end
end
end

View file

@ -1,60 +0,0 @@
require "pressa/plugin"
require "pressa/posts/repo"
require "pressa/posts/writer"
require "pressa/posts/gemini_writer"
require "pressa/posts/json_feed"
require "pressa/posts/rss_feed"
module Pressa
module Posts
class BasePlugin < Pressa::Plugin
attr_reader :posts_by_year
def setup(site:, source_path:)
posts_dir = File.join(source_path, "posts")
return unless Dir.exist?(posts_dir)
repo = PostRepo.new
@posts_by_year = repo.read_posts(posts_dir)
end
end
class HTMLPlugin < BasePlugin
def render(site:, target_path:)
return unless @posts_by_year
writer = PostWriter.new(site:, posts_by_year: @posts_by_year)
writer.write_posts(target_path:)
writer.write_recent_posts(target_path:, limit: 10)
writer.write_archive(target_path:)
writer.write_year_indexes(target_path:)
writer.write_month_rollups(target_path:)
json_feed = JSONFeedWriter.new(site:, posts_by_year: @posts_by_year)
json_feed.write_feed(target_path:, limit: 30)
rss_feed = RSSFeedWriter.new(site:, posts_by_year: @posts_by_year)
rss_feed.write_feed(target_path:, limit: 30)
end
end
class GeminiPlugin < BasePlugin
def render(site:, target_path:)
return unless @posts_by_year
writer = GeminiWriter.new(site:, posts_by_year: @posts_by_year)
writer.write_posts(target_path:)
writer.write_recent_posts(target_path:, limit: gemini_recent_posts_limit(site))
writer.write_posts_index(target_path:)
end
private
def gemini_recent_posts_limit(site)
site.gemini_output_options&.recent_posts_limit || GeminiWriter::RECENT_POSTS_LIMIT
end
end
Plugin = HTMLPlugin
end
end

View file

@ -1,125 +0,0 @@
require "kramdown"
require "pressa/posts/models"
require "pressa/posts/metadata"
module Pressa
module Posts
class PostRepo
EXCERPT_LENGTH = 300
def initialize(output_path: "posts")
@output_path = output_path
@posts_by_year = {}
end
def read_posts(posts_dir)
enumerate_markdown_files(posts_dir) do |file_path|
post = read_post(file_path)
add_post_to_hierarchy(post)
end
PostsByYear.new(by_year: @posts_by_year)
end
private
def enumerate_markdown_files(dir, &block)
Dir.glob(File.join(dir, "**", "*.md")).each(&block)
end
def read_post(file_path)
content = File.read(file_path)
metadata = PostMetadata.parse(content)
body_markdown = content.sub(/\A---\s*\n.*?\n---\s*\n/m, "")
html_body = render_markdown(body_markdown)
slug = File.basename(file_path, ".md")
path = generate_path(slug, metadata.date)
excerpt = generate_excerpt(body_markdown)
Post.new(
slug:,
title: metadata.title,
author: metadata.author,
date: metadata.date,
formatted_date: metadata.formatted_date,
link: metadata.link,
tags: metadata.tags,
body: html_body,
markdown_body: body_markdown,
excerpt:,
path:
)
end
def render_markdown(markdown)
Kramdown::Document.new(
markdown,
input: "GFM",
hard_wrap: false,
syntax_highlighter: "rouge",
syntax_highlighter_opts: {
line_numbers: false,
wrap: true
}
).to_html
end
def generate_path(slug, date)
year = date.year
month = format("%02d", date.month)
"/#{@output_path}/#{year}/#{month}/#{slug}"
end
def generate_excerpt(markdown)
text = markdown.dup
text.gsub!(/!\[[^\]]*\]\([^)]+\)/, "")
text.gsub!(/!\[[^\]]*\]\[[^\]]+\]/, "")
text.gsub!(/\[([^\]]+)\]\([^)]+\)/, '\1')
text.gsub!(/\[([^\]]+)\]\[[^\]]+\]/, '\1')
text.gsub!(/^\[[^\]]+\]:\s*\S.*$/, "")
text.gsub!(/<[^>]+>/, "")
text.gsub!(/\s+/, " ")
text.strip!
return "..." if text.empty?
"#{text[0...EXCERPT_LENGTH]}..."
end
def add_post_to_hierarchy(post)
year = post.year
month_num = post.month
@posts_by_year[year] ||= create_year_posts(year)
year_posts = @posts_by_year[year]
month_posts = year_posts.by_month[month_num]
if month_posts
updated_posts = month_posts.posts + [post]
year_posts.by_month[month_num] = MonthPosts.new(
month: month_posts.month,
posts: updated_posts
)
else
month = Month.from_date(post.date)
year_posts.by_month[month_num] = MonthPosts.new(
month:,
posts: [post]
)
end
end
def create_year_posts(year)
YearPosts.new(year:, by_month: {})
end
end
end
end

View file

@ -1,53 +0,0 @@
require "builder"
require "pressa/utils/file_writer"
require "pressa/views/feed_post_view"
module Pressa
module Posts
class RSSFeedWriter
def initialize(site:, posts_by_year:)
@site = site
@posts_by_year = posts_by_year
end
def write_feed(target_path:, limit: 30)
recent = @posts_by_year.recent_posts(limit)
xml = Builder::XmlMarkup.new(indent: 2)
xml.instruct! :xml, version: "1.0", encoding: "UTF-8"
xml.rss :version => "2.0",
"xmlns:atom" => "http://www.w3.org/2005/Atom",
"xmlns:content" => "http://purl.org/rss/1.0/modules/content/" do
xml.channel do
xml.title @site.title
xml.link @site.url
xml.description @site.description
xml.pubDate recent.first.date.rfc822 if recent.any?
xml.tag! "atom:link", href: @site.url_for("/feed.xml"), rel: "self", type: "application/rss+xml"
recent.each do |post|
xml.item do
title = post.title
permalink = @site.url_for(post.path)
xml.title title
xml.link permalink
xml.guid permalink, isPermaLink: "true"
xml.pubDate post.date.rfc822
xml.author post.author
xml.tag!("content:encoded") { xml.cdata!(render_feed_post(post)) }
end
end
end
end
file_path = File.join(target_path, "feed.xml")
Utils::FileWriter.write(path: file_path, content: xml.target!)
end
def render_feed_post(post)
Views::FeedPostView.new(post:, site: @site).call
end
end
end
end

View file

@ -1,137 +0,0 @@
require "pressa/utils/file_writer"
require "pressa/views/layout"
require "pressa/views/post_view"
require "pressa/views/recent_posts_view"
require "pressa/views/archive_view"
require "pressa/views/year_posts_view"
require "pressa/views/month_posts_view"
module Pressa
module Posts
class PostWriter
def initialize(site:, posts_by_year:)
@site = site
@posts_by_year = posts_by_year
end
def write_posts(target_path:)
@posts_by_year.all_posts.each do |post|
write_post(post:, target_path:)
end
end
def write_recent_posts(target_path:, limit: 10)
recent = @posts_by_year.recent_posts(limit)
content_view = Views::RecentPostsView.new(posts: recent, site: @site)
html = render_layout(
page_subtitle: nil,
canonical_url: @site.url,
content: content_view,
page_description: "Recent posts",
page_type: "article"
)
file_path = File.join(target_path, "index.html")
Utils::FileWriter.write(path: file_path, content: html)
end
def write_archive(target_path:)
content_view = Views::ArchiveView.new(posts_by_year: @posts_by_year, site: @site)
html = render_layout(
page_subtitle: "Archive",
canonical_url: @site.url_for("/posts/"),
content: content_view,
page_description: "Archive of all posts"
)
file_path = File.join(target_path, "posts", "index.html")
Utils::FileWriter.write(path: file_path, content: html)
end
def write_year_indexes(target_path:)
@posts_by_year.sorted_years.each do |year|
year_posts = @posts_by_year.by_year[year]
write_year_index(year:, year_posts:, target_path:)
end
end
def write_month_rollups(target_path:)
@posts_by_year.by_year.each do |year, year_posts|
year_posts.by_month.each do |_month_num, month_posts|
write_month_rollup(year:, month_posts:, target_path:)
end
end
end
private
def write_post(post:, target_path:)
content_view = Views::PostView.new(post:, site: @site, article_class: "container")
html = render_layout(
page_subtitle: post.title,
canonical_url: @site.url_for(post.path),
content: content_view,
page_description: post.excerpt,
page_type: "article"
)
file_path = File.join(target_path, post.path.sub(/^\//, ""), "index.html")
Utils::FileWriter.write(path: file_path, content: html)
end
def write_year_index(year:, year_posts:, target_path:)
content_view = Views::YearPostsView.new(year:, year_posts:, site: @site)
html = render_layout(
page_subtitle: year.to_s,
canonical_url: @site.url_for("/posts/#{year}/"),
content: content_view,
page_description: "Archive of all posts from #{year}",
page_type: "article"
)
file_path = File.join(target_path, "posts", year.to_s, "index.html")
Utils::FileWriter.write(path: file_path, content: html)
end
def write_month_rollup(year:, month_posts:, target_path:)
month = month_posts.month
content_view = Views::MonthPostsView.new(year:, month_posts:, site: @site)
title = "#{month.name} #{year}"
html = render_layout(
page_subtitle: title,
canonical_url: @site.url_for("/posts/#{year}/#{month.padded}/"),
content: content_view,
page_description: "Archive of all posts from #{title}",
page_type: "article"
)
file_path = File.join(target_path, "posts", year.to_s, month.padded, "index.html")
Utils::FileWriter.write(path: file_path, content: html)
end
def render_layout(
page_subtitle:,
canonical_url:,
content:,
page_description: nil,
page_type: "website"
)
layout = Views::Layout.new(
site: @site,
page_subtitle:,
canonical_url:,
page_description:,
page_type:,
content:
)
layout.call
end
end
end
end

View file

@ -1,22 +0,0 @@
require "dry-struct"
require "pressa/site"
require "uri"
module Pressa
module Projects
class Project < Dry::Struct
attribute :name, Types::String
attribute :title, Types::String
attribute :description, Types::String
attribute :url, Types::String
def github_path
uri = URI.parse(url)
uri.path.sub(/^\//, "")
end
def path
"/projects/#{name}"
end
end
end
end

View file

@ -1,138 +0,0 @@
require "pressa/plugin"
require "pressa/utils/file_writer"
require "pressa/views/layout"
require "pressa/views/projects_view"
require "pressa/views/project_view"
require "pressa/projects/models"
module Pressa
module Projects
class HTMLPlugin < Pressa::Plugin
attr_reader :scripts, :styles
def initialize(projects: [], scripts: [], styles: [])
@projects = projects
@scripts = scripts
@styles = styles
end
def setup(site:, source_path:)
end
def render(site:, target_path:)
write_projects_index(site:, target_path:)
@projects.each do |project|
write_project_page(project:, site:, target_path:)
end
end
private
def write_projects_index(site:, target_path:)
content_view = Views::ProjectsView.new(projects: @projects, site:)
html = render_layout(
site:,
page_subtitle: "Projects",
canonical_url: site.url_for("/projects/"),
content: content_view
)
file_path = File.join(target_path, "projects", "index.html")
Utils::FileWriter.write(path: file_path, content: html)
end
def write_project_page(project:, site:, target_path:)
content_view = Views::ProjectView.new(project:, site:)
html = render_layout(
site:,
page_subtitle: project.title,
canonical_url: site.url_for(project.path),
content: content_view,
page_scripts: @scripts,
page_styles: @styles,
page_description: project.description
)
file_path = File.join(target_path, "projects", project.name, "index.html")
Utils::FileWriter.write(path: file_path, content: html)
end
def render_layout(
site:,
page_subtitle:,
canonical_url:,
content:,
page_scripts: [],
page_styles: [],
page_description: nil
)
layout = Views::Layout.new(
site:,
page_subtitle:,
canonical_url:,
page_scripts:,
page_styles:,
page_description:,
content:
)
layout.call
end
end
class GeminiPlugin < Pressa::Plugin
def initialize(projects: [])
@projects = projects
end
def setup(site:, source_path:)
end
def render(site:, target_path:)
write_projects_index(site:, target_path:)
@projects.each do |project|
write_project_page(project:, site:, target_path:)
end
end
private
def write_projects_index(site:, target_path:)
rows = ["# Projects", ""]
@projects.each do |project|
rows << "## #{project.title}"
rows << project.description
rows << "=> #{project.url}"
rows << ""
end
rows << "=> / Home"
rows << "=> #{site.url_for("/projects/")} Read on the web"
rows << ""
file_path = File.join(target_path, "projects", "index.gmi")
Utils::FileWriter.write(path: file_path, content: rows.join("\n"))
end
def write_project_page(project:, site:, target_path:)
rows = [
"# #{project.title}",
"",
project.description,
"",
"=> #{project.url}",
"=> /projects/ Back to projects",
""
]
file_path = File.join(target_path, "projects", project.name, "index.gmi")
Utils::FileWriter.write(path: file_path, content: rows.join("\n"))
end
end
Plugin = HTMLPlugin
end
end

View file

@ -1,76 +0,0 @@
require "dry-struct"
module Pressa
module Types
include Dry.Types()
end
class OutputLink < Dry::Struct
# label is required for HTML remote links, but Gemini home_links may omit it.
attribute :label, Types::String.optional.default(nil)
attribute :href, Types::String
attribute :icon, Types::String.optional.default(nil)
end
class Script < Dry::Struct
attribute :src, Types::String
attribute :defer, Types::Bool.default(true)
end
class Stylesheet < Dry::Struct
attribute :href, Types::String
end
class OutputOptions < Dry::Struct
attribute :public_excludes, Types::Array.of(Types::String).default([].freeze)
end
class HTMLOutputOptions < OutputOptions
attribute :remote_links, Types::Array.of(OutputLink).default([].freeze)
end
class GeminiOutputOptions < OutputOptions
attribute :recent_posts_limit, Types::Integer.default(20)
attribute :home_links, Types::Array.of(OutputLink).default([].freeze)
end
class Site < Dry::Struct
OUTPUT_OPTIONS = Types.Instance(OutputOptions)
attribute :author, Types::String
attribute :email, Types::String
attribute :title, Types::String
attribute :description, Types::String
attribute :url, Types::String
attribute :fediverse_creator, Types::String.optional.default(nil)
attribute :image_url, Types::String.optional.default(nil)
attribute :copyright_start_year, Types::Integer.optional.default(nil)
attribute :scripts, Types::Array.of(Script).default([].freeze)
attribute :styles, Types::Array.of(Stylesheet).default([].freeze)
attribute :plugins, Types::Array.default([].freeze)
attribute :renderers, Types::Array.default([].freeze)
attribute :output_format, Types::String.default("html".freeze).enum("html", "gemini")
attribute :output_options, OUTPUT_OPTIONS.default { HTMLOutputOptions.new }
def url_for(path)
"#{url}#{path}"
end
def image_url_for(path)
return nil unless image_url
"#{image_url}#{path}"
end
def public_excludes
output_options.public_excludes
end
def html_output_options
output_options if output_options.is_a?(HTMLOutputOptions)
end
def gemini_output_options
output_options if output_options.is_a?(GeminiOutputOptions)
end
end
end

View file

@ -1,147 +0,0 @@
require "fileutils"
require "pressa/utils/file_writer"
module Pressa
class SiteGenerator
attr_reader :site
def initialize(site:)
@site = site
end
def generate(source_path:, target_path:)
validate_paths!(source_path:, target_path:)
FileUtils.rm_rf(target_path)
FileUtils.mkdir_p(target_path)
setup_site = site
setup_site.plugins.each { |plugin| plugin.setup(site: setup_site, source_path:) }
@site = site_with_copyright_start_year(setup_site)
site.plugins.each { |plugin| plugin.render(site:, target_path:) }
copy_static_files(source_path, target_path)
process_public_directory(source_path, target_path)
end
private
def validate_paths!(source_path:, target_path:)
source_abs = absolute_path(source_path)
target_abs = absolute_path(target_path)
return unless contains_path?(container: target_abs, path: source_abs)
raise ArgumentError, "target_path must not be the same as or contain source_path"
end
def absolute_path(path)
File.exist?(path) ? File.realpath(path) : File.expand_path(path)
end
def contains_path?(container:, path:)
path == container || path.start_with?("#{container}#{File::SEPARATOR}")
end
def copy_static_files(source_path, target_path)
public_dir = File.join(source_path, "public")
return unless Dir.exist?(public_dir)
Dir.glob(File.join(public_dir, "**", "*"), File::FNM_DOTMATCH).each do |source_file|
next if File.directory?(source_file)
next if skip_file?(source_file)
next if skip_for_output_format?(source_file:, public_dir:)
filename = File.basename(source_file)
ext = File.extname(source_file)[1..]
if can_render?(filename, ext)
next
end
relative_path = source_file.sub("#{public_dir}/", "")
target_file = File.join(target_path, relative_path)
FileUtils.mkdir_p(File.dirname(target_file))
FileUtils.cp(source_file, target_file)
end
end
def can_render?(filename, ext)
site.renderers.any? { |renderer| renderer.can_render_file?(filename:, extension: ext) }
end
def process_public_directory(source_path, target_path)
public_dir = File.join(source_path, "public")
return unless Dir.exist?(public_dir)
site.renderers.each do |renderer|
Dir.glob(File.join(public_dir, "**", "*"), File::FNM_DOTMATCH).each do |source_file|
next if File.directory?(source_file)
next if skip_file?(source_file)
next if skip_for_output_format?(source_file:, public_dir:)
filename = File.basename(source_file)
ext = File.extname(source_file)[1..]
if renderer.can_render_file?(filename:, extension: ext)
dir_name = File.dirname(source_file)
relative_path = if dir_name == public_dir
""
else
dir_name.sub("#{public_dir}/", "")
end
target_dir = File.join(target_path, relative_path)
renderer.render(site:, file_path: source_file, target_dir:)
end
end
end
end
def skip_file?(source_file)
basename = File.basename(source_file)
basename.start_with?(".")
end
def skip_for_output_format?(source_file:, public_dir:)
relative_path = source_file.sub("#{public_dir}/", "")
site.public_excludes.any? do |pattern|
excluded_by_pattern?(relative_path:, pattern:)
end
end
def excluded_by_pattern?(relative_path:, pattern:)
normalized = pattern.sub(%r{\A/+}, "")
if normalized.end_with?("/**")
prefix = normalized.delete_suffix("/**")
return relative_path.start_with?("#{prefix}/") || relative_path == prefix
end
File.fnmatch?(normalized, relative_path, File::FNM_PATHNAME)
end
def site_with_copyright_start_year(base_site)
start_year = find_copyright_start_year(base_site)
attrs = base_site.to_h.merge(
output_options: base_site.output_options,
copyright_start_year: start_year
)
Site.new(**attrs)
end
def find_copyright_start_year(base_site)
years = base_site.plugins.filter_map do |plugin|
next unless plugin.respond_to?(:posts_by_year)
posts_by_year = plugin.posts_by_year
next unless posts_by_year.respond_to?(:earliest_year)
posts_by_year.earliest_year
end
years.min || Time.now.year
end
end
end
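`public_excludes` patterns come in two shapes: a trailing `/**` excludes a directory subtree, and anything else is matched as a pathname glob. A standalone copy of `excluded_by_pattern?` demonstrates both branches:

```ruby
# Copied from SiteGenerator: "/drafts/**" excludes the drafts subtree,
# while plain patterns go through File.fnmatch? with FNM_PATHNAME
# (so "*" does not cross directory separators).
def excluded_by_pattern?(relative_path:, pattern:)
  normalized = pattern.sub(%r{\A/+}, "")
  if normalized.end_with?("/**")
    prefix = normalized.delete_suffix("/**")
    return relative_path.start_with?("#{prefix}/") || relative_path == prefix
  end
  File.fnmatch?(normalized, relative_path, File::FNM_PATHNAME)
end

subtree = excluded_by_pattern?(relative_path: "drafts/2024/post.md", pattern: "/drafts/**")
glob    = excluded_by_pattern?(relative_path: "notes.txt", pattern: "*.txt")
kept    = excluded_by_pattern?(relative_path: "posts/2024/post.md", pattern: "/drafts/**")
```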

View file

@ -1,20 +0,0 @@
require "fileutils"
module Pressa
module Utils
class FileWriter
def self.write(path:, content:, permissions: 0o644)
FileUtils.mkdir_p(File.dirname(path))
File.write(path, content, mode: "w")
File.chmod(permissions, path)
end
def self.write_data(path:, data:, permissions: 0o644)
FileUtils.mkdir_p(File.dirname(path))
File.binwrite(path, data)
File.chmod(permissions, path)
end
end
end
end

View file

@ -1,67 +0,0 @@
require "yaml"
require "pressa/utils/file_writer"
require "pressa/utils/gemtext_renderer"
module Pressa
module Utils
class GeminiMarkdownRenderer
def can_render_file?(filename:, extension:)
extension == "md"
end
def render(site:, file_path:, target_dir:)
content = File.read(file_path)
metadata, body_markdown = parse_content(content)
page_title = presence(metadata["Title"]) || File.basename(file_path, ".md").capitalize
show_extension = ["true", "yes", true].include?(metadata["Show extension"])
slug = File.basename(file_path, ".md")
relative_dir = File.dirname(file_path).sub(/^.*?\/public\/?/, "")
relative_dir = "" if relative_dir == "."
canonical_html_path = if show_extension
"/#{relative_dir}/#{slug}.html".squeeze("/")
else
"/#{relative_dir}/#{slug}/".squeeze("/")
end
rows = ["# #{page_title}", ""]
gemtext_body = GemtextRenderer.render(body_markdown)
rows << gemtext_body unless gemtext_body.empty?
rows << "" unless rows.last.to_s.empty?
rows << "=> #{site.url_for(canonical_html_path)} Read on the web"
rows << ""
output_filename = if show_extension
"#{slug}.gmi"
else
File.join(slug, "index.gmi")
end
output_path = File.join(target_dir, output_filename)
FileWriter.write(path: output_path, content: rows.join("\n"))
end
private
def parse_content(content)
if content =~ /\A---\s*\n(.*?)\n---\s*\n(.*)/m
yaml_content = Regexp.last_match(1)
markdown = Regexp.last_match(2)
metadata = YAML.safe_load(yaml_content) || {}
[metadata, markdown]
else
[{}, content]
end
end
def presence(value)
return value unless value.respond_to?(:strip)
stripped = value.strip
stripped.empty? ? nil : stripped
end
end
end
end

View file

@ -1,257 +0,0 @@
require "cgi"
module Pressa
module Utils
class GemtextRenderer
class << self
def render(markdown)
lines = markdown.to_s.gsub("\r\n", "\n").split("\n")
link_reference_definitions = extract_link_reference_definitions(lines)
output_lines = []
in_preformatted_block = false
lines.each do |line|
if line.start_with?("```")
output_lines << "```"
in_preformatted_block = !in_preformatted_block
next
end
if in_preformatted_block
output_lines << line
next
end
next if link_reference_definition?(line)
converted_lines = convert_line(line, link_reference_definitions)
output_lines.concat(converted_lines)
end
squish_blank_lines(output_lines).join("\n").strip
end
private
def convert_line(line, link_reference_definitions)
stripped = line.strip
return [""] if stripped.empty?
return convert_heading(stripped, link_reference_definitions) if heading_line?(stripped)
return convert_list_item(stripped, link_reference_definitions) if list_item_line?(stripped)
return convert_quote_line(stripped, link_reference_definitions) if quote_line?(stripped)
convert_text_line(line, link_reference_definitions)
end
def convert_heading(line, link_reference_definitions)
marker, text = line.split(/\s+/, 2)
heading_text, links = extract_links(text.to_s, link_reference_definitions)
rows = []
rows << "#{marker} #{clean_inline_text(heading_text)}".strip
rows.concat(render_link_rows(links))
rows
end
def convert_list_item(line, link_reference_definitions)
text = line.sub(/\A[-*+]\s+/, "")
if link_only_list_item?(text, link_reference_definitions)
_clean_text, links = extract_links(text, link_reference_definitions)
return render_link_rows(links)
end
clean_text, links = extract_links(text, link_reference_definitions)
rows = []
rows << "* #{clean_inline_text(clean_text)}".strip
rows.concat(render_link_rows(links))
rows
end
def convert_quote_line(line, link_reference_definitions)
text = line.sub(/\A>\s?/, "")
clean_text, links = extract_links(text, link_reference_definitions)
rows = []
rows << "> #{clean_inline_text(clean_text)}".strip
rows.concat(render_link_rows(links))
rows
end
def convert_text_line(line, link_reference_definitions)
clean_text, links = extract_links(line, link_reference_definitions)
if !links.empty? && clean_inline_text(strip_links_from_text(line)).empty?
return render_link_rows(links)
end
rows = []
inline_text = clean_inline_text(clean_text)
rows << inline_text unless inline_text.empty?
rows.concat(render_link_rows(links))
rows.empty? ? [""] : rows
end
def extract_links(text, link_reference_definitions)
links = []
work = text.dup
work.gsub!(%r{<a\s+[^>]*href=["']([^"']+)["'][^>]*>(.*?)</a>}i) do
url = Regexp.last_match(1)
label = clean_inline_text(strip_html_tags(Regexp.last_match(2)))
links << [url, label]
label
end
work.gsub!(/\[([^\]]+)\]\(([^)\s]+)(?:\s+"[^"]*")?\)/) do
label = clean_inline_text(Regexp.last_match(1))
url = Regexp.last_match(2)
links << [url, label]
label
end
work.gsub!(/\[([^\]]+)\]\[([^\]]*)\]/) do
label_text = Regexp.last_match(1)
reference_key = Regexp.last_match(2)
reference_key = label_text if reference_key.strip.empty?
url = resolve_link_reference(link_reference_definitions, reference_key)
next Regexp.last_match(0) unless url
label = clean_inline_text(label_text)
links << [url, label]
label
end
work.scan(/(?:href|src)=["']([^"']+)["']/i) do |match|
url = match.first
next if links.any? { |(existing_url, _)| existing_url == url }
links << [url, fallback_label(url)]
end
[work, links]
end
def resolve_link_reference(link_reference_definitions, key)
link_reference_definitions[normalize_link_reference_key(key)]
end
def link_only_list_item?(text, link_reference_definitions)
_clean_text, links = extract_links(text, link_reference_definitions)
return false if links.empty?
remaining_text = strip_links_from_text(text)
normalized_remaining = clean_inline_text(remaining_text)
return true if normalized_remaining.empty?
links_count = links.length
links_count == 1 && normalized_remaining.match?(/\A[\w@.+\-\/ ]+:\z/)
end
def extract_link_reference_definitions(lines)
links = {}
lines.each do |line|
match = line.match(/\A\s{0,3}\[([^\]]+)\]:\s*(\S+)/)
next unless match
key = normalize_link_reference_key(match[1])
value = match[2]
value = value[1..-2] if value.start_with?("<") && value.end_with?(">")
links[key] = value
end
links
end
def normalize_link_reference_key(key)
key.to_s.strip.downcase.gsub(/\s+/, " ")
end
def strip_links_from_text(text)
work = text.dup
work.gsub!(%r{<a\s+[^>]*href=["'][^"']+["'][^>]*>.*?</a>}i, "")
work.gsub!(/\[([^\]]+)\]\(([^)\s]+)(?:\s+"[^"]*")?\)/, "")
work.gsub!(/\[([^\]]+)\]\[([^\]]*)\]/, "")
work
end
def render_link_rows(links)
links.filter_map do |url, label|
next nil if url.nil? || url.strip.empty?
"=> #{url}"
end
end
def clean_inline_text(text)
cleaned = text.to_s.dup
cleaned = strip_html_tags(cleaned)
cleaned.gsub!(/`([^`]+)`/, '\1')
cleaned.gsub!(/\*\*([^*]+)\*\*/, '\1')
cleaned.gsub!(/__([^_]+)__/, '\1')
cleaned.gsub!(/\*([^*]+)\*/, '\1')
cleaned.gsub!(/_([^_]+)_/, '\1')
cleaned.gsub!(/\s+/, " ")
cleaned = CGI.unescapeHTML(cleaned)
cleaned = decode_named_html_entities(cleaned)
cleaned.strip
end
def decode_named_html_entities(text)
text.gsub(/&([A-Za-z]+);/) do
entity = Regexp.last_match(1).downcase
case entity
when "darr" then "\u2193"
when "uarr" then "\u2191"
when "larr" then "\u2190"
when "rarr" then "\u2192"
when "hellip" then "..."
when "nbsp" then " "
else
"&#{Regexp.last_match(1)};"
end
end
end
def strip_html_tags(text)
text.gsub(/<[^>]+>/, "")
end
def fallback_label(url)
uri_path = url.split("?").first
basename = File.basename(uri_path.to_s)
return url if basename.nil? || basename.empty? || basename == "/"
basename
end
def heading_line?(line)
line.match?(/\A\#{1,3}\s+/)
end
def list_item_line?(line)
line.match?(/\A[-*+]\s+/)
end
def quote_line?(line)
line.start_with?(">")
end
def link_reference_definition?(line)
line.match?(/\A\s{0,3}\[[^\]]+\]:\s+\S/)
end
def squish_blank_lines(lines)
output = []
previous_blank = false
lines.each do |line|
blank = line.strip.empty?
next if blank && previous_blank
output << line
previous_blank = blank
end
output
end
end
end
end
end
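The link handling above follows the gemtext constraint that links occupy their own `=>` lines: inline Markdown links are collapsed to their label in the prose, and their URLs are appended as separate rows. A reduced sketch covering inline links only (the real renderer also handles reference links, raw HTML anchors, and bare `href`/`src` attributes):

```ruby
# Replace [label](url) with label, collecting each url as its own => line.
def markdown_links_to_gemtext(text)
  links = []
  plain = text.gsub(/\[([^\]]+)\]\(([^)\s]+)\)/) do
    links << Regexp.last_match(2)
    Regexp.last_match(1)
  end
  [plain, *links.map { |url| "=> #{url}" }]
end

rows = markdown_links_to_gemtext("See [the spec](https://example.com/spec) for details.")
```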

View file

@ -1,148 +0,0 @@
require "kramdown"
require "yaml"
require "pressa/utils/file_writer"
require "pressa/site"
require "pressa/views/layout"
require "pressa/views/icons"
module Pressa
module Utils
class MarkdownRenderer
EXCERPT_LENGTH = 300
def can_render_file?(filename:, extension:)
extension == "md"
end
def render(site:, file_path:, target_dir:)
content = File.read(file_path)
metadata, body_markdown = parse_content(content)
html_body = render_markdown(body_markdown)
page_title = presence(metadata["Title"]) || File.basename(file_path, ".md").capitalize
page_type = presence(metadata["Page type"]) || "website"
page_description = presence(metadata["Description"]) || generate_excerpt(body_markdown)
show_extension = ["true", "yes", true].include?(metadata["Show extension"])
slug = File.basename(file_path, ".md")
relative_dir = File.dirname(file_path).sub(/^.*?\/public\/?/, "")
relative_dir = "" if relative_dir == "."
canonical_path = if show_extension
"/#{relative_dir}/#{slug}.html".squeeze("/")
else
"/#{relative_dir}/#{slug}/".squeeze("/")
end
html = render_layout(
site:,
page_subtitle: page_title,
canonical_url: site.url_for(canonical_path),
body: html_body,
page_description:,
page_type:
)
output_filename = if show_extension
"#{slug}.html"
else
File.join(slug, "index.html")
end
output_path = File.join(target_dir, output_filename)
FileWriter.write(path: output_path, content: html)
end
private
def parse_content(content)
if content =~ /\A---\s*\n(.*?)\n---\s*\n(.*)/m
yaml_content = $1
markdown = $2
metadata = YAML.safe_load(yaml_content) || {}
[metadata, markdown]
else
[{}, content]
end
end
def render_markdown(markdown)
Kramdown::Document.new(
markdown,
input: "GFM",
hard_wrap: false,
syntax_highlighter: "rouge",
syntax_highlighter_opts: {
line_numbers: false,
wrap: true
}
).to_html
end
def render_layout(site:, page_subtitle:, canonical_url:, body:, page_description:, page_type:)
layout = Views::Layout.new(
site:,
page_subtitle:,
canonical_url:,
page_description:,
page_type:,
content: PageView.new(page_title: page_subtitle, body:)
)
layout.call
end
class PageView < Phlex::HTML
def initialize(page_title:, body:)
@page_title = page_title
@body = body
end
def view_template
article(class: "container") do
h1 { @page_title }
raw(safe(@body))
end
div(class: "row clearfix") do
p(class: "fin") do
raw(safe(Views::Icons.code))
end
end
end
end
def generate_excerpt(markdown)
text = markdown.dup
# Drop inline and reference-style images before links are simplified.
text.gsub!(/!\[[^\]]*\]\([^)]+\)/, "")
text.gsub!(/!\[[^\]]*\]\[[^\]]+\]/, "")
# Replace inline and reference links with just their text.
text.gsub!(/\[([^\]]+)\]\([^)]+\)/, '\1')
text.gsub!(/\[([^\]]+)\]\[[^\]]+\]/, '\1')
# Remove link reference definitions such as: [foo]: http://example.com
text.gsub!(/^\[[^\]]+\]:\s*\S.*$/, "")
text.gsub!(/<[^>]+>/, "")
text.gsub!(/\s+/, " ")
text.strip!
return nil if text.empty?
"#{text[0...EXCERPT_LENGTH]}..."
end
def presence(value)
return value unless value.respond_to?(:strip)
stripped = value.strip
stripped.empty? ? nil : stripped
end
end
end
end
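The `generate_excerpt` pipeline above can be exercised standalone. A minimal sketch, assuming `EXCERPT_LENGTH` is 100 (the real constant is defined elsewhere in the class) and covering only the inline-image, inline-link, reference-definition, and HTML-tag passes:

```ruby
EXCERPT_LENGTH = 100 # assumed value; the real constant lives elsewhere in the class

def excerpt(markdown)
  text = markdown.dup
  text.gsub!(/!\[[^\]]*\]\([^)]+\)/, "")    # drop inline images
  text.gsub!(/\[([^\]]+)\]\([^)]+\)/, '\1') # inline links become their text
  text.gsub!(/^\[[^\]]+\]:\s*\S.*$/, "")    # remove link reference definitions
  text.gsub!(/<[^>]+>/, "")                 # strip raw HTML tags
  text.gsub!(/\s+/, " ")                    # collapse whitespace
  text.strip!
  return nil if text.empty?
  "#{text[0...EXCERPT_LENGTH]}..."
end

excerpt("See [the docs](https://example.com) ![logo](/logo.png) for details.")
# => "See the docs for details...."
```

Note that the trailing `...` is appended unconditionally, even when nothing was actually truncated.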


@ -1,24 +0,0 @@
require "phlex"
require "pressa/views/year_posts_view"
module Pressa
module Views
class ArchiveView < Phlex::HTML
def initialize(posts_by_year:, site:)
@posts_by_year = posts_by_year
@site = site
end
def view_template
div(class: "container") do
h1 { "Archive" }
end
@posts_by_year.sorted_years.each do |year|
year_posts = @posts_by_year.by_year[year]
render Views::YearPostsView.new(year:, year_posts:, site: @site)
end
end
end
end
end


@ -1,33 +0,0 @@
require "phlex"
module Pressa
module Views
class FeedPostView < Phlex::HTML
def initialize(post:, site:)
@post = post
@site = site
end
def view_template
div do
p(class: "time") { @post.formatted_date }
raw(safe(normalized_body))
p do
a(class: "permalink", href: @site.url_for(@post.path)) { "" }
end
end
end
private
def normalized_body
@post.body.gsub(/(href|src)=(['"])(\/(?!\/)[^'"]*)\2/) do
attr = Regexp.last_match(1)
quote = Regexp.last_match(2)
path = Regexp.last_match(3)
%(#{attr}=#{quote}#{@site.url_for(path)}#{quote})
end
end
end
end
end
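The `normalized_body` rewrite can be tried in isolation. A sketch with a hard-coded base URL standing in for `site.url_for` (the `BASE` constant is an assumption for illustration):

```ruby
BASE = "https://samhuri.net" # hypothetical stand-in for site.url_for

def absolutize(html)
  # Rewrite root-relative href/src attributes to absolute URLs.
  html.gsub(/(href|src)=(['"])(\/(?!\/)[^'"]*)\2/) do
    attr = Regexp.last_match(1)
    quote = Regexp.last_match(2)
    path = Regexp.last_match(3)
    %(#{attr}=#{quote}#{BASE}#{path}#{quote})
  end
end

absolutize(%(<a href="/posts/x/">x</a> <img src="//cdn.example.com/i.png">))
# => %(<a href="https://samhuri.net/posts/x/">x</a> <img src="//cdn.example.com/i.png">)
```

The negative lookahead `(?!\/)` is what leaves protocol-relative `//` URLs untouched.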


@ -1,34 +0,0 @@
module Pressa
module Views
module Icons
module_function
def mastodon
svg(class_name: "icon icon-mastodon", view_box: "0 0 448 512", path: IconPath::MASTODON)
end
def github
svg(class_name: "icon icon-github", view_box: "0 0 496 512", path: IconPath::GITHUB)
end
def rss
svg(class_name: "icon icon-rss", view_box: "0 0 448 512", path: IconPath::RSS)
end
def code
svg(class_name: "icon icon-code", view_box: "0 0 640 512", path: IconPath::CODE)
end
private_class_method def svg(class_name:, view_box:, path:)
"<svg class=\"#{class_name}\" viewBox=\"#{view_box}\" aria-hidden=\"true\" focusable=\"false\"><path transform=\"translate(0,448) scale(1,-1)\" d=\"#{path}\"/></svg>"
end
module IconPath
MASTODON = "M433 268.89c0 0 0.799805 -71.6992 -9 -121.5c-6.23047 -31.5996 -55.1104 -66.1992 -111.23 -72.8994c-20.0996 -2.40039 -93.1191 -14.2002 -178.75 6.7002c0 -0.116211 -0.00390625 -0.119141 -0.00390625 -0.235352c0 -4.63281 0.307617 -9.19434 0.904297 -13.665 c6.62988 -49.5996 49.2197 -52.5996 89.6299 -54c40.8105 -1.2998 77.1201 10.0996 77.1201 10.0996l1.7002 -36.8994s-28.5098 -15.2998 -79.3203 -18.1006c-28.0098 -1.59961 -62.8193 0.700195 -103.33 11.4004c-112.229 29.7002 -105.63 173.4 -105.63 289.1 c0 97.2002 63.7197 125.7 63.7197 125.7c61.9209 28.4004 227.96 28.7002 290.48 0c0 0 63.71 -28.5 63.71 -125.7zM357.88 143.69c0 122 5.29004 147.71 -18.4199 175.01c-25.71 28.7002 -79.7197 31 -103.83 -6.10059l-11.5996 -19.5l-11.6006 19.5 c-24.0098 36.9004 -77.9297 35 -103.83 6.10059c-23.6094 -27.1006 -18.4092 -52.9004 -18.4092 -175h46.7295v114.2c0 49.6992 64 51.5996 64 -6.90039v-62.5098h46.3301v62.5c0 58.5 64 56.5996 64 6.89941v-114.199h46.6299z"
GITHUB = "M165.9 50.5996c0 -2 -2.30078 -3.59961 -5.2002 -3.59961c-3.2998 -0.299805 -5.60059 1.2998 -5.60059 3.59961c0 2 2.30078 3.60059 5.2002 3.60059c3 0.299805 5.60059 -1.2998 5.60059 -3.60059zM134.8 55.0996c0.700195 2 3.60059 3 6.2002 2.30078 c3 -0.900391 4.90039 -3.2002 4.2998 -5.2002c-0.599609 -2 -3.59961 -3 -6.2002 -2c-3 0.599609 -5 2.89941 -4.2998 4.89941zM179 56.7998c2.90039 0.299805 5.59961 -1 5.90039 -2.89941c0.299805 -2 -1.7002 -3.90039 -4.60059 -4.60059 c-3 -0.700195 -5.59961 0.600586 -5.89941 2.60059c-0.300781 2.2998 1.69922 4.19922 4.59961 4.89941zM244.8 440c138.7 0 251.2 -105.3 251.2 -244c0 -110.9 -67.7998 -205.8 -167.8 -239c-12.7002 -2.2998 -17.2998 5.59961 -17.2998 12.0996 c0 8.2002 0.299805 49.9004 0.299805 83.6006c0 23.5 -7.7998 38.5 -17 46.3994c55.8994 6.30078 114.8 14 114.8 110.5c0 27.4004 -9.7998 41.2002 -25.7998 58.9004c2.59961 6.5 11.0996 33.2002 -2.60059 67.9004c-20.8994 6.59961 -69 -27 -69 -27 c-20 5.59961 -41.5 8.5 -62.7998 8.5s-42.7998 -2.90039 -62.7998 -8.5c0 0 -48.0996 33.5 -69 27c-13.7002 -34.6006 -5.2002 -61.4004 -2.59961 -67.9004c-16 -17.5996 -23.6006 -31.4004 -23.6006 -58.9004c0 -96.1992 56.4004 -104.3 112.3 -110.5 c-7.19922 -6.59961 -13.6992 -17.6992 -16 -33.6992c-14.2998 -6.60059 -51 -17.7002 -72.8994 20.8994c-13.7002 23.7998 -38.6006 25.7998 -38.6006 25.7998c-24.5 0.300781 -1.59961 -15.3994 -1.59961 -15.3994c16.4004 -7.5 27.7998 -36.6006 27.7998 -36.6006 c14.7002 -44.7998 84.7002 -29.7998 84.7002 -29.7998c0 -21 0.299805 -55.2002 0.299805 -61.3994c0 -6.5 -4.5 -14.4004 -17.2998 -12.1006c-99.7002 33.4004 -169.5 128.3 -169.5 239.2c0 138.7 106.1 244 244.8 244zM97.2002 95.0996 c1.2998 1.30078 3.59961 0.600586 5.2002 -1c1.69922 -1.89941 2 -4.19922 0.699219 -5.19922c-1.2998 -1.30078 -3.59961 -0.600586 -5.19922 1c-1.7002 1.89941 -2 4.19922 -0.700195 5.19922zM86.4004 103.2c0.699219 1 2.2998 1.2998 4.2998 0.700195 c2 -1 3 -2.60059 2.2998 -3.90039c-0.700195 -1.40039 -2.7002 -1.7002 -4.2998 -0.700195c-2 1 -3 2.60059 -2.2998 3.90039zM118.8 67.5996c1.2998 1.60059 4.2998 1.30078 6.5 -1c2 -1.89941 2.60059 -4.89941 1.2998 -6.19922 c-1.2998 -1.60059 -4.19922 -1.30078 -6.5 1c-2.2998 1.89941 -2.89941 4.89941 -1.2998 6.19922zM107.4 82.2998c1.59961 1.2998 4.19922 0.299805 5.59961 -2c1.59961 -2.2998 1.59961 -4.89941 0 -6.2002c-1.2998 -1 -4 0 -5.59961 2.30078 c-1.60059 2.2998 -1.60059 4.89941 0 5.89941z"
RSS = "M128.081 32.041c0 -35.3691 -28.6719 -64.041 -64.041 -64.041s-64.04 28.6719 -64.04 64.041s28.6719 64.041 64.041 64.041s64.04 -28.6729 64.04 -64.041zM303.741 -15.209c0.494141 -9.13477 -6.84668 -16.791 -15.9951 -16.79h-48.0693 c-8.41406 0 -15.4707 6.49023 -16.0176 14.8867c-7.29883 112.07 -96.9404 201.488 -208.772 208.772c-8.39648 0.545898 -14.8867 7.60254 -14.8867 16.0176v48.0693c0 9.14746 7.65625 16.4883 16.791 15.9941c154.765 -8.36328 278.596 -132.351 286.95 -286.95z M447.99 -15.4971c0.324219 -9.03027 -6.97168 -16.5029 -16.0049 -16.5039h-48.0684c-8.62598 0 -15.6455 6.83496 -15.999 15.4531c-7.83789 191.148 -161.286 344.626 -352.465 352.465c-8.61816 0.354492 -15.4531 7.37402 -15.4531 15.999v48.0684 c0 9.03418 7.47266 16.3301 16.5029 16.0059c234.962 -8.43555 423.093 -197.667 431.487 -431.487z"
CODE = "M278.9 -63.5l-61 17.7002c-6.40039 1.7998 -10 8.5 -8.2002 14.8994l136.5 470.2c1.7998 6.40039 8.5 10 14.8994 8.2002l61 -17.7002c6.40039 -1.7998 10 -8.5 8.2002 -14.8994l-136.5 -470.2c-1.89941 -6.40039 -8.5 -10.1006 -14.8994 -8.2002zM164.9 48.7002 c-4.5 -4.90039 -12.1006 -5.10059 -17 -0.5l-144.101 135.1c-5.09961 4.7002 -5.09961 12.7998 0 17.5l144.101 135c4.89941 4.60059 12.5 4.2998 17 -0.5l43.5 -46.3994c4.69922 -4.90039 4.2998 -12.7002 -0.800781 -17.2002l-90.5996 -79.7002l90.5996 -79.7002 c5.10059 -4.5 5.40039 -12.2998 0.800781 -17.2002zM492.1 48.0996c-4.89941 -4.5 -12.5 -4.2998 -17 0.600586l-43.5 46.3994c-4.69922 4.90039 -4.2998 12.7002 0.800781 17.2002l90.5996 79.7002l-90.5996 79.7998c-5.10059 4.5 -5.40039 12.2998 -0.800781 17.2002 l43.5 46.4004c4.60059 4.7998 12.2002 5 17 0.5l144.101 -135.2c5.09961 -4.7002 5.09961 -12.7998 0 -17.5z"
end
end
end
end


@ -1,347 +0,0 @@
require "phlex"
require "pressa/views/icons"
module Pressa
module Views
class Layout < Phlex::HTML
attr_reader :site,
:page_subtitle,
:page_description,
:page_type,
:canonical_url,
:page_scripts,
:page_styles,
:content
def initialize(
site:,
canonical_url:, page_subtitle: nil,
page_description: nil,
page_type: "website",
page_scripts: [],
page_styles: [],
content: nil
)
@site = site
@page_subtitle = page_subtitle
@page_description = page_description
@page_type = page_type
@canonical_url = canonical_url
@page_scripts = page_scripts
@page_styles = page_styles
@content = content
end
def view_template
doctype
html(lang: "en") do
comment { "meow" }
head do
meta(charset: "UTF-8")
title { full_title }
meta(name: "twitter:title", content: full_title)
meta(property: "og:title", content: full_title)
meta(name: "description", content: description)
meta(name: "twitter:description", content: description)
meta(property: "og:description", content: description)
meta(property: "og:site_name", content: site.title)
link(rel: "canonical", href: canonical_url)
meta(name: "twitter:url", content: canonical_url)
meta(property: "og:url", content: canonical_url)
meta(property: "og:image", content: og_image_url) if og_image_url
meta(property: "og:type", content: page_type)
meta(property: "article:author", content: site.author)
meta(name: "twitter:card", content: "summary")
link(
rel: "alternate",
href: site.url_for("/feed.xml"),
type: "application/rss+xml",
title: site.title
)
link(
rel: "alternate",
href: site.url_for("/feed.json"),
type: "application/json",
title: site.title
)
meta(name: "fediverse:creator", content: site.fediverse_creator) if site.fediverse_creator
link(rel: "author", type: "text/plain", href: site.url_for("/humans.txt"))
link(rel: "icon", type: "image/png", href: site.url_for("/images/favicon-32x32.png"))
link(rel: "shortcut icon", href: site.url_for("/images/favicon.ico"))
link(rel: "apple-touch-icon", href: site.url_for("/images/apple-touch-icon.png"))
link(rel: "mask-icon", color: "#aa0000", href: site.url_for("/images/safari-pinned-tab.svg"))
link(rel: "manifest", href: site.url_for("/images/manifest.json"))
meta(name: "msapplication-config", content: site.url_for("/images/browserconfig.xml"))
meta(name: "theme-color", content: "#121212")
meta(name: "viewport", content: "width=device-width, initial-scale=1.0, viewport-fit=cover")
link(rel: "dns-prefetch", href: "https://gist.github.com")
all_styles.each do |style|
link(rel: "stylesheet", type: "text/css", href: style_href(style.href))
end
end
body do
render_header
render(content) if content
render_footer
render_scripts
end
end
end
private
def description
page_description || site.description
end
def full_title
return site.title unless page_subtitle
"#{site.title}: #{page_subtitle}"
end
def og_image_url
site.image_url
end
def all_styles
site.styles + page_styles
end
def all_scripts
site.scripts + page_scripts
end
def render_header
header(class: "primary") do
div(class: "title") do
h1 do
a(href: site.url) { site.title }
end
h4 do
plain "By "
a(href: site.url_for("/about")) { site.author }
end
end
nav(class: "remote") do
ul do
remote_nav_links.each do |link|
li(class: remote_link_class(link)) do
attrs = {"aria-label": link.label, href: remote_link_href(link.href)}
attrs[:rel] = "me" if mastodon_link?(link)
a(**attrs) do
icon_markup = remote_link_icon_markup(link)
if icon_markup
raw(safe(icon_markup))
else
plain link.label
end
end
end
end
end
end
nav(class: "local") do
ul do
li { a(href: site.url_for("/about")) { "About" } }
li { a(href: site.url_for("/posts")) { "Archive" } }
li { a(href: site.url_for("/projects")) { "Projects" } }
end
end
div(class: "clearfix")
end
end
def render_footer
footer do
plain "© #{footer_years} "
a(href: site.url_for("/about")) { site.author }
end
end
def render_scripts
all_scripts.each do |scr|
attrs = {src: script_src(scr.src)}
attrs[:defer] = true if scr.defer
script(**attrs)
end
render_gemini_fallback_script
end
def render_gemini_fallback_script
# Inline so the behavior ships with the base HTML layout without needing
# separate asset management for one small handler.
script do
raw(safe(<<~JS))
(function () {
function isPlainLeftClick(e) {
return (
e.button === 0 &&
!e.defaultPrevented &&
!e.metaKey &&
!e.ctrlKey &&
!e.shiftKey &&
!e.altKey
);
}
function setupGeminiFallback() {
var links = document.querySelectorAll(
'header.primary nav.remote a[href^="gemini://"]'
);
if (!links || links.length === 0) return;
for (var i = 0; i < links.length; i++) {
(function (link) {
link.addEventListener("click", function (e) {
if (!isPlainLeftClick(e)) return;
e.preventDefault();
var geminiHref = link.getAttribute("href");
var fallbackHref = "https://geminiprotocol.net";
var done = false;
var fallbackTimer = null;
function cleanup() {
if (fallbackTimer) window.clearTimeout(fallbackTimer);
document.removeEventListener("visibilitychange", onVisibilityChange);
window.removeEventListener("pagehide", onPageHide);
window.removeEventListener("blur", onBlur);
}
function markDone() {
done = true;
cleanup();
}
function onVisibilityChange() {
// If a handler opens and the browser backgrounded, consider it "successful".
if (document.visibilityState === "hidden") markDone();
}
function onPageHide() {
markDone();
}
function onBlur() {
// Some browsers blur the page when a protocol handler is invoked.
markDone();
}
document.addEventListener("visibilitychange", onVisibilityChange);
window.addEventListener("pagehide", onPageHide, { once: true });
window.addEventListener("blur", onBlur, { once: true });
// If we're still here shortly after attempting navigation, assume it failed.
fallbackTimer = window.setTimeout(function () {
if (done) return;
window.location.href = fallbackHref;
}, 900);
window.location.href = geminiHref;
});
})(links[i]);
}
}
if (document.readyState === "loading") {
document.addEventListener("DOMContentLoaded", setupGeminiFallback);
} else {
setupGeminiFallback();
}
})();
JS
end
end
def script_src(src)
return src if src.start_with?("http://", "https://")
absolute_asset(src)
end
def style_href(href)
return href if href.start_with?("http://", "https://")
absolute_asset(href)
end
def absolute_asset(path)
normalized = path.start_with?("/") ? path : "/#{path}"
site.url_for(normalized)
end
def footer_years
current_year = Time.now.year
start_year = site.copyright_start_year || current_year
return current_year.to_s if start_year >= current_year
"#{start_year} - #{current_year}"
end
def html_remote_links
site.html_output_options&.remote_links || []
end
def remote_nav_links
html_remote_links
end
def remote_link_href(href)
return href if href.match?(/\A[a-z][a-z0-9+\-.]*:/i)
absolute_asset(href)
end
def remote_link_class(link)
slug = link.icon || link.label.downcase.gsub(/[^a-z0-9]+/, "-").gsub(/^-|-$/, "")
"remote-link #{slug}"
end
def remote_link_icon_markup(link)
# Gemini doesn't have an obvious, widely-recognized protocol icon.
# Use a simple custom SVG mark so it aligns like the other SVG icons.
if link.icon == "gemini"
return <<~SVG.strip
<svg class="icon icon-gemini-protocol" viewBox="0 0 24 24" aria-hidden="true" focusable="false">
<path transform="translate(12 12) scale(0.84 1.04) translate(-12 -12)" d="M18,5.3C19.35,4.97 20.66,4.54 21.94,4L21.18,2.14C18.27,3.36 15.15,4 12,4C8.85,4 5.73,3.38 2.82,2.17L2.06,4C3.34,4.54 4.65,4.97 6,5.3V18.7C4.65,19.03 3.34,19.46 2.06,20L2.82,21.86C8.7,19.42 15.3,19.42 21.18,21.86L21.94,20C20.66,19.46 19.35,19.03 18,18.7V5.3M8,18.3V5.69C9.32,5.89 10.66,6 12,6C13.34,6 14.68,5.89 16,5.69V18.31C13.35,17.9 10.65,17.9 8,18.31V18.3Z"/>
</svg>
SVG
end
icon_renderer = remote_link_icon_renderer(link.icon)
return nil unless icon_renderer
Icons.public_send(icon_renderer)
end
def remote_link_icon_renderer(icon)
case icon
when "mastodon" then :mastodon
when "github" then :github
when "rss" then :rss
when "code" then :code
end
end
def mastodon_link?(link)
link.icon == "mastodon"
end
end
end
end
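The CSS-class slug in `remote_link_class` is just two chained substitutions; a standalone sketch of that fallback path (used when a link has no explicit `icon`):

```ruby
def icon_slug(label)
  label.downcase
       .gsub(/[^a-z0-9]+/, "-") # runs of non-alphanumerics become one hyphen
       .gsub(/^-|-$/, "")       # trim a leading or trailing hyphen
end

icon_slug("Tiny Gemini Capsule!") # => "tiny-gemini-capsule"
```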


@ -1,26 +0,0 @@
require "phlex"
require "pressa/views/post_view"
module Pressa
module Views
class MonthPostsView < Phlex::HTML
def initialize(year:, month_posts:, site:)
@year = year
@month_posts = month_posts
@site = site
end
def view_template
div(class: "container") do
h1 { "#{@month_posts.month.name} #{@year}" }
end
@month_posts.sorted_posts.each do |post|
div(class: "container") do
render PostView.new(post:, site: @site)
end
end
end
end
end
end


@ -1,46 +0,0 @@
require "phlex"
require "pressa/views/icons"
module Pressa
module Views
class PostView < Phlex::HTML
def initialize(post:, site:, article_class: nil)
@post = post
@site = site
@article_class = article_class
end
def view_template
article(**article_attributes) do
header do
h2 do
if @post.link_post?
a(href: @post.link) { @post.title }
else
a(href: @post.path) { @post.title }
end
end
time { @post.formatted_date }
a(href: @post.path, class: "permalink") { "" }
end
raw(safe(@post.body))
end
div(class: "row clearfix") do
p(class: "fin") do
raw(safe(Icons.code))
end
end
end
private
def article_attributes
return {} unless @article_class
{class: @article_class}
end
end
end
end


@ -1,63 +0,0 @@
require "phlex"
require "pressa/views/icons"
module Pressa
module Views
class ProjectView < Phlex::HTML
def initialize(project:, site:)
@project = project
@site = site
end
def view_template
article(class: "container project") do
h1(id: "project", data: {title: @project.title}) { @project.title }
h4 { @project.description }
div(class: "project-stats") do
p do
a(href: @project.url) { "GitHub" }
plain ""
a(id: "nstar", href: stargazers_url)
plain ""
a(id: "nfork", href: network_url)
end
p do
plain "Last updated on "
span(id: "updated")
end
end
div(class: "project-info row clearfix") do
div(class: "column half") do
h3 { "Contributors" }
div(id: "contributors")
end
div(class: "column half") do
h3 { "Languages" }
div(id: "langs")
end
end
end
div(class: "row clearfix") do
p(class: "fin") do
raw(safe(Icons.code))
end
end
end
private
def stargazers_url
"#{@project.url}/stargazers"
end
def network_url
"#{@project.url}/network/members"
end
end
end
end


@ -1,34 +0,0 @@
require "phlex"
require "pressa/views/icons"
module Pressa
module Views
class ProjectsView < Phlex::HTML
def initialize(projects:, site:)
@projects = projects
@site = site
end
def view_template
article(class: "container") do
h1 { "Projects" }
@projects.each do |project|
div(class: "project-listing") do
h4 do
a(href: @site.url_for(project.path)) { project.title }
end
p(class: "description") { project.description }
end
end
end
div(class: "row clearfix") do
p(class: "fin") do
raw(safe(Icons.code))
end
end
end
end
end
end


@ -1,21 +0,0 @@
require "phlex"
require "pressa/views/post_view"
module Pressa
module Views
class RecentPostsView < Phlex::HTML
def initialize(posts:, site:)
@posts = posts
@site = site
end
def view_template
div(class: "container") do
@posts.each do |post|
render PostView.new(post:, site: @site)
end
end
end
end
end
end


@ -1,66 +0,0 @@
require "phlex"
module Pressa
module Views
class YearPostsView < Phlex::HTML
def initialize(year:, year_posts:, site:)
@year = year
@year_posts = year_posts
@site = site
end
def view_template
div(class: "container") do
h2(class: "year") do
a(href: year_path) { @year.to_s }
end
@year_posts.sorted_months.each do |month_posts|
render_month(month_posts)
end
end
end
private
def year_path
@site.url_for("/posts/#{@year}/")
end
def render_month(month_posts)
month = month_posts.month
h3(class: "month") do
a(href: @site.url_for("/posts/#{@year}/#{month.padded}/")) do
month.name
end
end
ul(class: "archive") do
month_posts.sorted_posts.each do |post|
render_post_item(post)
end
end
end
def render_post_item(post)
if post.link_post?
li do
a(href: post.link) { post.title }
time { short_date(post.date) }
a(class: "permalink", href: post.path) { "" }
end
else
li do
a(href: post.path) { post.title }
time { short_date(post.date) }
end
end
end
def short_date(date)
date.strftime("%-d %b")
end
end
end
end

package.json Normal file

@ -0,0 +1,22 @@
{ "name" : "samhuri.net"
, "description" : "samhuri.net"
, "version" : "1.0.0"
, "homepage" : "http://samhuri.net/proj/samhuri.net"
, "author" : "Sami Samhuri <sami@samhuri.net>"
, "repository" :
{ "type" : "git"
, "url" : "https://github.com/samsonjs/samhuri.net.git"
}
, "bugs" :
{ "mail" : "sami@samhuri.net"
, "url" : "https://github.com/samsonjs/samhuri.net/issues"
}
, "dependencies" : { "mustache" : "0.3.x" }
, "bin" : { "discussd" : "./discussd/discussd.js" }
, "engines" : { "node" : ">=0.2.0" }
, "licenses" :
[ { "type" : "MIT"
, "url" : "http://github.com/samsonjs/samhuri.net/raw/master/LICENSE"
}
]
}


@ -1,12 +0,0 @@
---
Title: "First Post!"
Author: Sami Samhuri
Date: "8th February, 2006"
Timestamp: 2006-02-07T19:21:00-08:00
Tags: life
---
so it's 2am and i should be asleep, but instead i'm setting up a blog. i got a new desk last night and so today i finally got my apartment re-arranged and it's much better now. that's it for now... time to sleep.
(speaking of sleep, this new [sleeping bag](http://www.musuchouse.com/) design makes so much sense. awesome.)


@ -1,14 +0,0 @@
---
Title: "Girlfriend X"
Author: Sami Samhuri
Date: "18th February, 2006"
Timestamp: 2006-02-18T11:50:00-08:00
Tags: crazy, funny
---
This is hilarious! Someone wrote software that manages a "parallel" dating style.
> In addition to storing each woman's contact information and picture, the Girlfriend profiles include a Score Card where you track her sexual preferences, her menstrual cycles and how she styles her pubic hair.
It's called [Girlfriend X](http://www.wired.com/news/columns/0,70231-0.html), but that's a link to an article about it. I didn't go to the actual website. I just think it's amusing someone went to the trouble to do this. Maybe there's a demand for it. *\*shrug\**


@ -1,46 +0,0 @@
---
Title: "Intelligent Migration Snippets 0.1 for TextMate"
Author: Sami Samhuri
Date: "22nd February, 2006"
Timestamp: 2006-02-22T03:28:00-08:00
Tags: mac os x, textmate, rails, hacking, migrations, snippets
---
*This should be working now. I've tested it under a new user account here.*
*This does require the syncPeople bundle to be installed to work. That's ok, because you should get the [syncPeople on Rails bundle][syncPeople] anyways.*
When writing database migrations in Ruby on Rails it is common to create a table in the `self.up` method and then drop it in `self.down`. The same goes for adding, removing and renaming columns.
I wrote a Ruby program to insert code into both methods with a single snippet. All the TextMate commands and macros that you need are included.
### See it in action ###
I think this looks cool in action. Plus I like to show off what TextMate can do to people who may not use it, or don't have a Mac. It's just over 30 seconds long and weighs in at around 700kb.
<p style="text-align: center">
<img src="/images/download.png" title="Download" alt="Download">
<a href="/f/ims-demo.mov">Download Demo Video</a>
</p>
### Features ###
There are 3 snippets which are activated by the following tab triggers:
* __mcdt__: Migration Create and Drop Table
* __marc__: Migration Add and Remove Column
* __mnc__: Migration Rename Column
### Installation ###
Run **Quick Install.app** to install these commands to your [syncPeople on Rails bundle][syncPeople] if it exists, and to the default Rails bundle otherwise. (I highly recommend you get the syncPeople bundle if you haven't already.)
<p style="text-align: center">
<img src="/images/download.png" title="Download" alt="Download">
<a href="/f/IntelligentMigrationSnippets-0.1.dmg">Download Intelligent Migration Snippets</a>
</p>
This is specific to Rails migrations, but there are probably other uses for something like this. You are free to use and distribute this code.
[syncPeople]: http://blog.inquirylabs.com/


@ -1,10 +0,0 @@
---
Title: "Jump to view/controller in TextMate"
Author: Sami Samhuri
Date: "18th February, 2006"
Timestamp: 2006-02-18T14:51:00-08:00
Tags: hacking, rails, textmate
---
<a href="http://blog.inquirylabs.com/2006/02/17/controller-to-view-and-back-again-in-textmate/trackback/">Duane</a> came up with a way to jump to the controller method for the view you're editing, or vice versa in TextMate while coding using Rails. This is a huge time-saver, thanks!


@ -1,172 +0,0 @@
---
Title: "Obligatory Post about Ruby on Rails"
Author: Sami Samhuri
Date: "20th February, 2006"
Timestamp: 2006-02-20T00:31:00-08:00
Tags: rails, coding, hacking, migration, testing
---
<p><em>I'm a Rails newbie and eager to learn. I welcome any suggestions or criticism you have. You can direct them to <a href="mailto:sjs@uvic.ca">my inbox</a> or leave me a comment below.</em></p>
<p>I finally set myself up with a blog. I mailed my dad the address and mentioned that it was running <a href="http://www.typosphere.org/">Typo</a>, which is written in <a href="http://www.rubyonrails.com/">Ruby on Rails</a>. The fact that it is written in Rails was a big factor in my decision. I am currently reading <a href="http://www.pragmaticprogrammer.com/titles/rails/">Agile Web Development With Rails</a> and it will be great to use Typo as a learning tool, since I will be modifying my blog anyways regardless of what language it's written in.</p>
<p>Clearly Rails made an impression on me somehow or I wouldn't be investing this time on it. But my dad asked me a very good question:</p>
> Rails? What is so special about it? I looked at your page and it looks pretty normal to me. I miss the point of this new Rails technique for web development.
<p>It's unlikely that he was surprised at my lengthy response, but I was. I have been known to write him long messages on topics that interest me. However, I've only been learning Rails for two weeks or so. Could I possibly have so much to say about it already? Apparently I do.</p><h2>Ruby on Rails background</h2>
<p>I assume a pretty basic knowledge of what Rails is, so if you're not familiar with it now's a good time to read something on the official <a href="http://www.rubyonrails.com/">Rails website</a> and watch the infamous <a href="http://www.rubyonrails.com/screencasts">15-minute screencast</a>, where Rails creator, <a href="http://www.loudthinking.com/">David Heinemeier Hansson</a>, creates a simple blog application.</p>
<p>The screencasts are what sparked my curiosity, but they hardly scratch the surface of Rails. After that I spent hours reading whatever I could find about Rails before deciding to take the time to learn it well. As a result, a lot of what you read here will sound familiar if you've read other blogs and articles about Rails. This post wasn't planned so there's no list of references yet. I hope to add some links though so please contact me if any ideas or paraphrasing here is from your site, or if you know who I should give credit to.</p>
<h2>Rails through my eyes</h2>
<p>Rails is like my Black &amp; Decker toolkit. I have a hammer, power screwdriver, tape measure, needle-nose pliers, wire cutters, a level, etc. This is exactly what I need—no more, no less. It helps me get things done quickly and easily that would otherwise be painful and somewhat difficult. I can pick up the tools and use them without much training. Therefore I am instantly productive with them.</p>
<p>The kit is suitable for many people who need these things at home, such as myself. Companies build skyscrapers and huge malls and apartments, and they clearly need more powerful tools than I. There are others that just need to drive in a nail to hang a picture, in which case the kit I have is overkill. They're better off just buying and using a single hammer. I happen to fall in the big grey middle <a href="http://web.archive.org/web/20070316171839/http://poignantguide.net/ruby/chapter-3.html#section2">chunk</a>, not the other two.</p>
<p>I'm a university student. I code because it's satisfying and fun to create software. I do plan on coding for a living when I graduate. I don't work with ancient databases, or create monster sites like Amazon, Google, or Ebay. The last time I started coding a website from scratch I was using <a href="http://www.php.net/">PHP</a>, that was around the turn of the millennium. [It was a fan site for a <a href="http://www.nofx.org/">favourite band</a> of mine.]</p>
<p>After a year or so I realized I didn't have the time to do it properly (ie. securely and cleanly) if I wanted it to be done relatively soon. A slightly customized <a href="http://www.mediawiki.org/wiki/MediaWiki">MediaWiki</a> promptly took its place. It did all that I needed quite well, just in a less specific way.</p>
<p>The wiki is serving my site extremely well, but there's still that itch to create my <strong>own</strong> site. I feel if Rails was around back then I may have been able to complete the project in a timely manner. I was also frustrated with PHP. Part of that is likely due to a lack of experience and of formal programming education at that time, but it was still not fun for me. It wasn't until I started learning Rails that I thought "<em>hey, I could create that site pretty quickly using this!</em>"</p>
<p>Rails fits my needs like a glove, and this is where it shines. Many professionals are making money creating sites in Rails, so I'm not trying to say it's for amateurs only or something equally silly.</p>
<h2>Web Frameworks and iPods?</h2>
<p>Some might say I have merely been swept up in hype and am following the herd. You may be right, and that's okay. I'm going to tell you a story. There was a guy who didn't get one of the oh-so-shiny iPods for a long time, though they looked neat. His discman plays mp3 CDs, and that was good enough for him. The latest iPod, which plays video, was sufficiently cool enough for him to forget that <strong>everyone</strong> at his school has an iPod and he would be trendy just like them now.</p>
<p>Shocker ending: he is I, and I am him. Now I know why everyone has one of those shiny devices. iPods and web frameworks have little in common except that many believe both the iPod and Rails are all hype and flash. I've realized that something creating this kind of buzz may actually just be a good product. I feel that this is the only other thing the iPod and Rails have in common: they are both <strong>damn good</strong>. Enough about the iPod, everyone hates hearing about it. My goal is to write about the other thing everyone is tired of hearing about.</p>
<h2>Why is Rails special?</h2>
<p><strong>Rails is not magic.</strong> There are no exclusive JavaScript libraries or HTML tags. We all have to produce pages that render in the same web browsers. My dad was correct, there <em>is</em> nothing special about my website either. It's more or less a stock Typo website.</p>
<p>So what makes developing with Rails different? For me there are four big things that set Rails apart from the alternatives:</p>
<ol>
<li>Separating data, function, and design</li>
<li>Readability (which is underrated) </li>
<li>Database migrations</li>
<li>Testing is so easy it hurts</li>
</ol>
<h3>MVC 101 <em>(or, Separating data, function, and design)</em></h3>
<p>Now I'm sure you've heard about separating content from design. Rails takes that one step further from just using CSS to style your website. It uses what's known as the MVC paradigm: <strong>Model-View-Controller</strong>. This is a tried and tested development method. I'd used MVC before in Cocoa programming on Mac OS X, so I was already sold on this point.</p>
<ul>
<li>The model deals with your data. If you're creating an online store you have a product model, a shopping cart model, a customer model, etc. The model takes care of storing this data in the database (persistence), and presenting it to you as an object you can manipulate at runtime.</li>
</ul>
<ul>
<li>The view deals <em>only</em> with presentation. That's it, honestly. An interface to your app.</li>
</ul>
<ul>
<li>The controller binds the model to the view, so that when the user clicks on the <strong>Add to cart</strong> link the controller is wired to call the <code>add_product</code> method of the cart model and tell it which product to add. Then the controller takes the appropriate action such as redirecting the user to the shopping cart view.</li>
</ul>
<p>Of course this is not exclusive to Rails, but it's an integral part of its design.</p>
<h3>Readability</h3>
<p>Rails, and <a href="http://www.ruby-lang.org/">Ruby</a>, both read amazingly like spoken English. This code is more or less straight out of Typo. You define relationships between objects like this:</p>
```ruby
class Article < Content
has_many :comments, :dependent => true, :order => "created_at ASC"
has_many :trackbacks, :dependent => true, :order => "created_at ASC"
has_and_belongs_to_many :categories, :foreign_key => 'article_id'
has_and_belongs_to_many :tags, :foreign_key => 'article_id'
belongs_to :user
...
```
<p><code>:dependent =&gt; true</code> means <em>if an article is deleted, its comments go with it</em>. Don't worry if you don't understand it all, this is just for you to see some actual Rails code.</p>
<p>In the Comment model you have:</p>
```ruby
class Comment < Content
belongs_to :article
belongs_to :user
validates_presence_of :author, :body
validates_against_spamdb :body, :url, :ip
validates_age_of :article_id
...
```
<p>(I snuck in some validations as well)</p>
<p>But look how it reads! Read it out loud. I'd bet that my mom would more or less follow this, and she's anything but a programmer. That's not to say programming should be easy for grandma, <strong>but code should be easily understood by humans</strong>. Let the computer understand things that are natural for me to type, since we're making it understand a common language anyway.</p>
<p>Ruby and Ruby on Rails allow and encourage you to write beautiful code. That is so much more important than you may realize, because it leads to many other virtues. Readability is obvious, and hence maintainability. You must read code to understand and modify it. Oh, and happy programmers will be more productive than frustrated programmers.</p>
<h3 id="migrations">Database Migrations</h3>
<p>Here's one more life-saver: migrations. Migrations are a way to version your database schema from within Rails. Say you have a table, call it <code>albums</code>, and you want to add the date each album was released. You could modify the database directly, but that's no fun, and easy to lose track of. With migrations, even if you only have one server, all your schema changes live in one central place: the app. And Rails doesn't care if you have PostgreSQL, MySQL, or SQLite behind it. You can develop and test on SQLite and deploy on MySQL and the migrations will just work in both environments.</p>
```ruby
class AddDateReleased < ActiveRecord::Migration
def self.up
add_column "albums", "date_released", :datetime
    Album.update_all "date_released = now()"
end
def self.down
remove_column "albums", "date_released"
end
end
```
<p>Then you run the migration (<code>rake migrate</code> does that) and boom, you're up to date. If you're wondering, the <code>self.down</code> method indeed implies that you can take this the other direction as well. Think <code>rake migrate VERSION=X</code>.</p>
<p><em>Along with the other screencasts is one on <a href="http://www.rubyonrails.org/screencasts">migrations</a> featuring none other than David Hansson. You should take a look, it's the third video.</em></p>
<h3>Testing so easy it hurts</h3>
<p>To start a rails project you type <code>rails project_name</code> and it creates a directory structure with a fresh project in it. This includes a directory appropriately called <em>test</em> which houses unit tests for the project. When you generate models and controllers it creates test stubs for you in that directory. Basically, it makes it so easy to test that you're a fool not to do it. As someone wrote on their site: <em>It means never having to say "<strong>I introduced a new bug while fixing another.</strong>"</em></p>
<p>Rails builds on the unit testing that comes with Ruby. On a larger scale, that means that Rails is unlikely to flop on you because it is regularly tested using the same method. Ruby is unlikely to flop for the same reason. That makes me look good as a programmer. If you code for a living then it's of even more value to you.</p>
<p><em>I don't know why it hurts. Maybe it hurts developers working with other frameworks or languages to see us have it so nice and easy.</em></p>
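<p>The generated stubs build on Ruby's bundled unit testing, and the shape of a test is small enough to sketch framework-free. <code>Article</code> and its blank-comment rule here are hypothetical stand-ins, just to show the set-up / act / assert rhythm a stub gives you:</p>

```ruby
# A stand-in model: a real Rails test would exercise the actual Article class.
class Article
  attr_reader :comments

  def initialize
    @comments = []
  end

  def add_comment(body)
    @comments << body unless body.strip.empty?  # the rule under test
  end
end

# Same shape as a generated test stub: set up, act, assert.
def test_blank_comments_are_rejected
  article = Article.new
  article.add_comment("Nice post!")
  article.add_comment("   ")
  raise "expected exactly one comment" unless article.comments.size == 1
end

test_blank_comments_are_rejected
puts "1 test passed"
```

<p>Because the stub is sitting there waiting for you, writing the assertion is usually less work than manually re-checking the bug it guards against.</p>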
<h2>Wrapping up</h2>
<p>Rails means I have fun doing web development instead of being frustrated (CSS hacks aside). David Hansson may be right when he said you have to have been soured by Java or PHP to fully appreciate Rails, but that doesn't mean you won't enjoy it if you <em>do</em> like Java or PHP.</p>
<p><a href="http://www.relevancellc.com/blogs/wp-trackback.php?p=31">Justin Gehtland</a> rewrote a Java app using Rails and the number of lines of code of the Rails version was very close to that of the XML configuration for the Java version. Java has strengths, libraries available <strong>now</strong> seems to be a big one, but it's too big for my needs. If you're like me then maybe you'll enjoy Rails as much as I do.</p>
<h2>You're not done, you lied to me!</h2>
<p>Sort of... there are a few things that it seems standard to include when someone writes about how Rails saved their life and gave them hope again. For completeness' sake, I feel compelled to mention some principles common amongst those who develop Rails, and those who develop on Rails. It's entirely likely that there's nothing new for you here unless you're new to Rails or to programming, in which case I encourage you to read on.</p>
<h3>DRY</h3>
<p>Rails follows the DRY principle religiously. That is, <strong>Don't Repeat Yourself</strong>. Like MVC, I was already sold on this. I had previously encountered it in <a href="http://www.pragmaticprogrammer.com/ppbook/index.shtml">The Pragmatic Programmer</a>. Apart from telling <em>some_model</em> it <code>belongs_to :other_model</code> and <em>other_model</em> that it <code>has_many :some_models</code> nothing has jumped out at me which violates this principle. However, I feel that reading a model's code and seeing its relationships to other models right there is a Good Thing™.</p>
<h3>Convention over configuration <em>(or, Perceived intelligence)</em></h3>
<p>Rails' developers also have the mantra "<em>convention over configuration</em>", which you can see from the video there. (you did watch it, didn't you? ;) Basically that just means Rails has sane defaults, but is still flexible if you don't like the defaults. You don't have to write even one line of SQL with Rails, but if you need greater control then you <em>can</em> write your own SQL. A standard cliché: <em>it makes the simple things easy and the hard possible</em>.</p>
<p>Rails seems to have a level of intelligence which contributes to the wow-factor. After <a href="#migrations">these relationships</a> are defined I can now filter certain negative comments like so:</p>
```ruby
article = Article.find :first
for comment in article.comments do
  print comment unless comment.body.downcase == 'you suck!'
end
```
<p>Rails knows to look for the field <strong>article_id</strong> in the <strong>comments</strong> table of the database. This is just a convention. You can call it something else but then you have to tell Rails what you like to call it.</p>
<p>Rails understands pluralization, which is a detail but it makes everything feel more natural. If you have a <strong>Person</strong> model then it will know to look for the table named <strong>people</strong>.</p>
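<p>Rails' real inflector knows many more rules and irregular words than I could list, but a toy version illustrates the convention (this is a simplified sketch for illustration, not the actual Rails inflector):</p>

```ruby
# A toy model-name -> table-name inflector. Rails' real one handles
# far more rules; this just shows the convention at work.
IRREGULAR = { "person" => "people", "child" => "children" }

def tableize(model_name)
  word = model_name.downcase
  return IRREGULAR[word] if IRREGULAR.key?(word)
  if word.end_with?("y") && !word.end_with?("ay", "ey", "oy", "uy")
    word[0..-2] + "ies"  # Category -> categories
  elsif word.end_with?("s", "x", "ch", "sh")
    word + "es"          # Box -> boxes
  else
    word + "s"           # Album -> albums
  end
end

puts tableize("Person")    # => people
puts tableize("Category")  # => categories
puts tableize("Album")     # => albums
```

<p>Either way, the payoff is the same: you name the model, and the table name follows from convention unless you say otherwise.</p>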
<h3>Code as you learn</h3>
<p>I love how I've only been coding in Rails for a week or two and I can do so much already. It's natural, concise and takes care of the inane details. I love how I <em>know</em> that I don't even have to explain that migration example. It's plainly clear what it does to the database. It doesn't take long to get the basics down and once you do it goes <strong>fast</strong>.</p>

View file

@ -1,34 +0,0 @@
---
Title: "SJ's Rails Bundle 0.2 for TextMate"
Author: Sami Samhuri
Date: "23rd February, 2006"
Timestamp: 2006-02-23T17:18:00-08:00
Tags: textmate, rails, coding, bundle, macros, snippets
---
Everything that you've seen posted on my blog is now available in one bundle. Snippets for Rails database migrations and assertions are all included in this bundle.
There are 2 macros for class-end and def-end blocks, bound to <strong>⌃C</strong> and <strong>⌃D</strong> respectively. Type the class or method definition, except for <code>class</code> or <code>def</code>, and then type the keyboard shortcut and the rest is filled in for you.
I use an underscore to denote the position of the cursor in the following example:
```ruby
method(arg1, arg2_)
```
Typing <strong>⌃D</strong> at this point results in this code:
```ruby
def method(arg1, arg2)
_
end
```
There is a list of the snippets in Features.rtf, which is included in the disk image. Of course you can also browse them in the Snippets Editor built into TextMate.
Without further ado, here is the bundle:
<p style="text-align: center;"><img src="/images/download.png" title="Download" alt="Download"> <a href="/f/SJRailsBundle-0.2.dmg">Download SJ's Rails Bundle 0.2</a></p>
This is a work in progress, so any feedback you have is very helpful in making the next release better.

View file

@ -1,107 +0,0 @@
---
Title: "Some TextMate snippets for Rails Migrations"
Author: Sami Samhuri
Date: "18th February, 2006"
Timestamp: 2006-02-18T22:48:00-08:00
Tags: textmate, rails, hacking, snippets
---
My arsenal of snippets and macros in TextMate is building as I read through the rails canon, <a href="http://www.pragmaticprogrammer.com/titles/rails/" title="Agile Web Development With Rails">Agile Web Development...</a> I'm only 150 pages in so I haven't had to add much so far because I started with the bundle found on the <a href="http://wiki.rubyonrails.org/rails/pages/TextMate">rails wiki</a>. The main ones so far are for migrations.
Initially I wrote a snippet for adding a table and one for dropping a table, but I don't want to write it twice every time! If I'm adding a table in **up** then I probably want to drop it in **down**.
What I did was create one snippet that writes both lines, then it's just a matter of cut & paste to get it in **down**. The drop_table line should be inserted in the correct method, but that doesn't seem possible. I hope I'm wrong!
Scope should be *source.ruby.rails* and the triggers I use are above the snippets.
mcdt: **M**igration **C**reate and **D**rop **T**able
```ruby
create_table "${1:table}" do |t|
$0
end
${2:drop_table "$1"}
```
mcc: **M**igration **C**reate **C**olumn
```ruby
t.column "${1:title}", :${2:string}
```
marc: **M**igration **A**dd and **R**emove **C**olumn
```ruby
add_column "${1:table}", "${2:column}", :${3:string}
${4:remove_column "$1", "$2"}
```
I realize this might not be for everyone, so here are my original 4 snippets that do the work of *marc* and *mcdt*.
mct: **M**igration **C**reate **T**able
```ruby
create_table "${1:table}" do |t|
$0
end
```
mdt: **M**igration **D**rop **T**able
```ruby
drop_table "${1:table}"
```
mac: **M**igration **A**dd **C**olumn
```ruby
add_column "${1:table}", "${2:column}", :${3:string}
```
mrc: **M**igration **R**emove **C**olumn
```ruby
remove_column "${1:table}", "${2:column}"
```
I'll be adding more snippets and macros. There should be a central place where the rails bundle can be improved and extended. Maybe there is...
----
#### Comments
<div id="comment-1" class="comment">
<div class="name">
<a href="http://blog.inquirylabs.com/">Duane Johnson</a>
</div>
<span class="date" title="2006-02-19 06:48:00 -0800">Feb 19, 2006</span>
<div class="body">
<p>This looks great! I agree, we should have some sort of central place for these things, and
preferably something that's not under the management of the core Rails team as they have too
much to worry about already.</p>
<p>Would you mind if I steal your snippets and put them in the syncPeople on Rails bundle?</p>
</div>
</div>
<div id="comment-2" class="comment">
<div class="name">
<a href="https://samhuri.net">Sami Samhuri</a>
</div>
<span class="date" title="2006-02-19 18:48:00 -0800">Feb 19, 2006</span>
<div class="body">
<p>Not at all. I'm excited about this bundle you've got. Keep up the great work.</p>
</div>
</div>
<div id="comment-3" class="comment">
<div class="name">
<a href="http://blog.inquirylabs.com/">Duane Johnson</a>
</div>
<span class="date" title="2006-02-20 02:48:00 -0800">Feb 20, 2006</span>
<div class="body">
<p>Just added the snippets, Sami. I'll try to make a release tonight. Great work, and keep it coming!</p>
<p>P.S. I tried several ways to get the combo-snippets to put the pieces inside the right functions but failed. We'll see tomorrow if Allan (creator of TextMate) has any ideas.</p>
</div>
</div>

View file

@ -1,58 +0,0 @@
---
Title: "TextMate: Insert text into self.down"
Author: Sami Samhuri
Date: "21st February, 2006"
Timestamp: 2006-02-21T14:55:00-08:00
Tags: textmate, rails, hacking, commands, macro, snippets
---
<p><em><strong>UPDATE:</strong> I got everything working and it's all packaged up <a href="/posts/2006/02/intelligent-migration-snippets-0_1-for-textmate">here</a>. There's an installation script this time as well.</em></p>
<p>Thanks to <a href="http://thread.gmane.org/gmane.editors.textmate.general/8520">a helpful thread</a> on the TextMate mailing list I have the beginning of a solution to insert text at 2 (or more) locations in a file.</p>
<p>I implemented this for a new snippet I was working on for migrations, <code>rename_column</code>. Since the command is the same in self.up and self.down simply doing a reverse search for <code>rename_column</code> in my <a href="/posts/2006/02/textmate-move-selection-to-self-down">hackish macro</a> didn't return the cursor to the desired location.</p><p>That's enough introduction, here's the program to do the insertion:</p>
```ruby
#!/usr/bin/env ruby
def indent(s)
s =~ /^(\s*)/
' ' * $1.length
end
up_line = 'rename_column "${1:table}", "${2:column}", "${3:new_name}"$0'
down_line = "rename_column \"$$1\", \"$$3\", \"$$2\"\n"
# find the end of self.down and insert 2nd line
lines = STDIN.read.lines.reverse
ends_seen = 0
lines.each_with_index do |line, i|
ends_seen += 1 if line =~ /^\s*end\b/
if ends_seen == 2
lines[i..i] = [lines[i], indent(lines[i]) * 2 + down_line]
break
end
end
# return the new text, escaping special chars
print up_line + lines.reverse.join.gsub(/([$`\\])/, '\\\\\1').gsub(/\$\$/, '$')
```
<p>Save this as a command in your Rails, or <a href="http://blog.inquirylabs.com/">syncPeople on Rails</a>, bundle. The command options should be as follows:</p>
<ul>
<li><strong>Save:</strong> Nothing</li>
<li><strong>Input:</strong> Selected Text or Nothing</li>
<li><strong>Output:</strong> Insert as Snippet</li>
<li><strong>Activation:</strong> Whatever you want, I'm going to use a macro described below and leave this empty</li>
<li><strong>Scope Selector:</strong> source.ruby.rails</li>
</ul>
<p>The first modification it needs is to take the lines to insert as command line arguments so we can use it for other snippets. Secondly, regardless of the <strong>Re-indent pasted text</strong> setting, the text returned is indented incorrectly.</p>
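<p>One way that first modification might look: pull the reverse scan into a function that takes the text and the <code>self.down</code> line as arguments, so <code>ARGV</code> can supply the lines. This is a sketch of that refactor, not something I've wired back into TextMate:</p>

```ruby
# Sketch: the same bottom-up scan as the original command, but the
# migration text and the line bound for self.down are now parameters.
def indent_of(line)
  line[/\A\s*/]  # leading whitespace of the line
end

def insert_into_down(text, down_line)
  lines = text.lines.reverse
  ends_seen = 0
  lines.each_with_index do |line, i|
    ends_seen += 1 if line =~ /^\s*end\b/
    if ends_seen == 2  # scanning bottom-up, this is self.down's `end`
      lines[i..i] = [lines[i], indent_of(lines[i]) * 2 + down_line]
      break
    end
  end
  lines.reverse.join
end

# Invoked from TextMate as, e.g.:
#   insert_into_down.rb 'rename_column "t", "new", "old"' < document
# text      = STDIN.read
# down_line = ARGV[0] + "\n"
# print insert_into_down(text, down_line)
```

<p>With the logic isolated like this it's also trivial to test outside TextMate, which the original STDIN-only version wasn't.</p>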
The macro I'm thinking of to invoke this is tab-triggered and will simply:
<ul>
<li>Select word (<code><strong>⌃W</strong></code>)</li>
<li>Delete (<code><strong>⌫</strong></code>)</li>
<li>Select to end of file (<code><strong>⇧⌘↓</strong></code>)</li>
<li>Run command "Put in self.down"</li>
</ul>

View file

@ -1,29 +0,0 @@
---
Title: "TextMate: Move selection to self.down"
Author: Sami Samhuri
Date: "21st February, 2006"
Timestamp: 2006-02-21T00:26:00-08:00
Tags: textmate, rails, hacking, hack, macro
---
<p><strong>UPDATE:</strong> <em>This is obsolete, see <a href="/posts/2006/02/textmate-insert-text-into-self-down">this post</a> for a better solution.</em></p>
<p><a href="/posts/2006/02/some-textmate-snippets-for-rails-migrations.html#comment-3">Duane's comment</a> prompted me to think about how to get the <code>drop_table</code> and <code>remove_column</code> lines inserted in the right place. I don't think TextMate's snippets are built to do this sort of text manipulation. It would be nicer, but a quick hack will suffice for now.</p><p>Use <acronym title="Migration Create and Drop Table">MCDT</acronym> to insert:</p>
```ruby
create_table "table" do |t|
end
drop_table "table"
```
<p>Then press tab once more after typing the table name to select the code <code>drop_table "table"</code>. I created a macro that cuts the selected text, finds <code>def self.down</code> and pastes the line there. Then it searches for the previous occurrence of <code>create_table</code> and moves the cursor to the next line, ready for you to add some columns.</p>
<p>I have this bound to <strong>⌃⌥⌘M</strong> because it wasn't in use. If your Control key is to the left of the A key it's quite comfortable to hit this combo. Copy the following file into <strong>~/Library/Application Support/TextMate/Bundles/Rails.tmbundle/Macros</strong>.</p>
<p style="text-align: center;"><a href="http://sami.samhuri.net/files/move-to-self.down.plist">Move selection to self.down</a></p>
<p>This works for the <acronym title="Migration Add and Remove Column">MARC</acronym> snippet as well. I didn't tell you the whole truth, the macro actually finds the previous occurrence of <code>(create_table|add_column)</code>.</p>
<p>The caveat here is that if there is a <code>create_table</code> or <code>add_column</code> between <code>self.down</code> and the table you just added, it will jump back to the wrong spot. It's still faster than doing it all manually, but should be improved. If you use these exclusively, the order they occur in <code>self.down</code> will be opposite of that in <code>self.up</code>. That means either leaving things backwards or doing the re-ordering manually. =/</p>

View file

@ -1,18 +0,0 @@
---
Title: "TextMate Snippets for Rails Assertions"
Author: Sami Samhuri
Date: "20th February, 2006"
Timestamp: 2006-02-20T23:52:00-08:00
Tags: textmate, rails, coding, snippets, testing
---
This time I've got a few snippets for assertions. Using these to type up your tests quickly, and then hitting **⌘R** to run the tests without leaving TextMate, makes testing your Rails app that much more convenient. Just when you thought it was already too easy! (Don't forget that you can use **⌥⌘↓** to move between your code and the corresponding test case.)
This time I'm posting the .plist files to make it easier for you to add them to TextMate. All you need to do is copy these to **~/Library/Application Support/TextMate/Bundles/Rails.tmbundle/Snippets**.
<p style="text-align: center;"><a href="/f/assert_snippets.zip">Assertion Snippets for Rails</a></p>
If anyone would rather I list them all here I can do that as well. Just leave a comment.
*(I wanted to include a droplet in the zip file that will copy the snippets to the right place, but my 3-hour attempt at writing the AppleScript to do so left me feeling quite bitter. Maybe I was just mistaken in thinking it would be easy to pick up AppleScript.)*

Some files were not shown because too many files have changed in this diff.