A => HarfBuzz.png +0 -0
A => JuicyPixels.png +0 -0
A => _layouts/page.html +14 -0
@@ 1,14 @@
+---
+---
+<!DOCTYPE html>
+<html>
+<head>
+ <meta charset=utf-8 />
+ <meta name="viewport" content="width=device-width, initial-scale=1" />
+ <title>{{ page.title|xml_escape }}</title>
+ <link rel="alternate" type="application/atom+xml" href="blog.feed" />
+</head>
+<body>
+ {{ content }}
+</body>
+</html>
A => _layouts/post.html +20 -0
@@ 1,20 @@
+---
+---
+<!DOCTYPE html>
+<html>
+<head>
+ <meta charset=utf-8 />
+ <meta name="viewport" content="width=device-width, initial-scale=1" />
+ <title>{{ page.title|xml_escape }} — Argonaut Constellation's blog</title>
+ <link rel="stylesheet" href="https://cdn.simplecss.org/simple.css" />
+ <link rel="alternate" type="application/atom+xml" href="blog.feed" />
+</head>
+<body>
+ <h1>{{ page.title|xml_escape }}</h1>
+ <aside>By {{ page.author }}
+ on <time>{{ page.date | date: "%b %-d, %Y" }}</time>
+ for <a href="/">Rhapsode</a>'s <a href="/blog" title="Rhapsode's Blog">Blog</a></aside>
+
+ {{ content }}
+</body>
+</html>
A => _posts/2020-10-31-why-auditory.md +100 -0
@@ 1,100 @@
+---
+layout: post
+title: Why an Auditory Browser?
+author: Adrian Cochrane
+date: 2020-10-31 20:38:51 +1300
+---
+
+I thought I might start a blog to discuss how and why Rhapsode works the way it does.
+And what better place to start than "why is Rhapsode an *auditory* web browser?"
+
+## It's accessible!
+The blind, amongst numerous others, [deserve](http://gameaccessibilityguidelines.com/why-and-how/) as *excellent* a computing experience as
+the rest of us! Yet webdesigners *far too* often don't consider them, and webdevelopers
+*far too* often [exclude them](https://webaim.org/projects/million/) in favour of visual slickness.
+
+Anyone who can't operate a mouse, keyboard, or touchscreen, anyone who can't see well or
+at all, anyone who can't afford the latest hardware is being
+*excluded* from [our conversations online](https://ferd.ca/you-reap-what-you-code.html).
+*[A crossfade](https://adactio.com/journal/17573) is not worth this loss*!
+
+Currently the blind are [reliant](https://bighack.org/5-most-annoying-website-features-i-face-as-a-blind-screen-reader-user-accessibility/)
+on "screenreaders" to describe the webpages, and applications, they're interacting with.
+Screenreaders in turn rely on webpages to inform them of the semantics being
+communicated visually, which webpages rarely do.
+
+But *even if* those semantics were communicated, screenreaders would *still* offer a poor
+experience, as they retrofit auditory output upon an inherently visual experience.
+
+## It's cool!
+It's unfortunately [not](https://webaim.org/projects/million/) considered cool to show
+disabled people the *dignity* they deserve.
+
+But you know what is considered cool?
+[Voice assistants](https://marketingland.com/more-than-200-million-smart-speakers-have-been-sold-why-arent-they-a-marketing-channel-276012)!
+Or at least that's what Silicon Valley wants us to believe as they sell us
+[Siri](https://www.apple.com/siri/), [Cortana](https://www.microsoft.com/en-us/cortana/),
+[Alexa](https://en.wikipedia.org/wiki/Amazon_Alexa), and other
+[privacy-invasive](https://www.theguardian.com/technology/2019/oct/09/alexa-are-you-invading-my-privacy-the-dark-side-of-our-voice-assistants)
+cloud-centric services.
+
+Guess what? These feminine voices [are accessible](https://vimeo.com/event/540113#t=2975s) to many people otherwise excluded from
+modern computing! Maybe voice assistants can make web accessibility cool? Maybe I can
+deliver an alternative web experience people will *want* to use even if they don't need to?
+
+## It's different!
+On a visual display you can show multiple items onscreen at the same time for your eyes
+to choose where to focus their attention moment-to-moment. You can even update those items
+live without confusing anyone!
+
+In contrast, in auditory communication information is positioned in time rather than space,
+whilst what you say (or type) is limited by your memory rather than screen real estate.
+
+Visual and auditory user experiences are two
+[totally different](https://developer.amazon.com/en-US/docs/alexa/alexa-design/get-started.html)
+beasts, and that makes developing a voice assistant platform interesting!
+
+## It works!
+Webpages in general are still mostly text. Text can be rendered to audio output
+just as (if not more) readily as it can be rendered to visual output. HTML markup
+can be naturally communicated via tone-of-voice. And links can become voice
+commands! A natural match!
+
+Yes, this totally breaks down in the presence of JavaScript with its device-centric
+input events and ability to output anything whenever, wherever it wants. But I'll
+never be able to catch up in terms of JavaScript support, even if I didn't have
+grave concerns about it!
+
+In practice I find that [most websites](https://hankchizljaw.com/wrote/the-(extremely)-loud-minority/)
+work perfectly fine without JavaScript; it's mainly just the *popular* ones which don't.
+
+## It's simple!
+You may be surprised to learn it's actually *simpler* for me to start my browser
+developments with an auditory offering like Rhapsode! This is because laying out
+text on a one-dimensional timeline is trivial, whilst laying it out in 2-dimensional
+space absolutely isn't. Especially when considering the needs of languages other
+than English!
+
+Once downloaded (along with its CSS and sound effects), rendering a webpage
+essentially just takes applying a specially-designed [CSS](https://hankchizljaw.com/wrote/css-doesnt-suck/)
+stylesheet! This yields data that can be almost directly passed to basically any
+text-to-speech engine like [eSpeak NG](http://espeak.sourceforge.net/).
+
+Whilst input, whether from the keyboard or a speech-to-text engine like [CMU Sphinx](https://cmusphinx.github.io/),
+is handled through string comparisons against links extracted from the webpage.
+
+## It's efficient!
+I could discuss how the efficiency gained from the aforementioned simplicity is
+important because CPUs are no longer getting any faster, only gaining more cores.
+But that would imply that it was a valid strategy to wait for the latest hardware
+rather than invest time in optimization.
+
+Because performant software is [good for the environment](https://tomgamon.com/posts/is-it-morally-wrong-to-write-inefficient-code/)!
+
+Not only because speed *loosely*
+[correlates](https://thenewstack.io/which-programming-languages-use-the-least-electricity/)
+with energy efficiency, but also because if our slow software pushes others to
+buy new hardware (which, again, they might not be able to afford), manufacturing that
+new computer incurs
+[significant](https://solar.lowtechmagazine.com/2009/06/embodied-energy-of-digital-technology.html)
+environmental cost.
A => _posts/2020-11-12-css.md +209 -0
@@ 1,209 @@
+---
+layout: post
+title: How Does CSS Work?
+author: Adrian Cochrane
+date: 2020-11-12 20:35:06 +1300
+---
+
+Rendering a webpage in Rhapsode takes little more than applying a
+[useragent stylesheet](https://meiert.com/en/blog/user-agent-style-sheets/)
+to decide how the page's semantics should be communicated,
+[in addition to](https://www.w3.org/TR/CSS2/cascade.html#cascade) any installed
+userstyles and *optionally* author styles.
+
+Once the [CSS](https://www.w3.org/Style/CSS/Overview.en.html) has been applied
+Rhapsode sends the styled text to [eSpeak NG](https://github.com/espeak-ng/espeak-ng)
+to be converted into the sounds you hear. So *how* does Rhapsode apply that CSS?
+
+## Parsing
+[Parser](http://parsingintro.sourceforge.net/) implementations differ mainly in
+*what* they implement rather than *how*. They repeatedly look at the next character(s)
+in the input stream to decide how to represent it in-RAM. Often there'll be a
+"lexing" step (for which I use [Haskell CSS Syntax](https://hackage.haskell.org/package/css-syntax))
+to categorize consecutive characters into "tokens", thereby simplifying the main parser.
+
+My choice to use [Haskell](https://www.haskell.org/), however, does change things
+a little. In Haskell there can be [*no side effects*](https://mmhaskell.com/blog/2017/1/9/immutability-is-awesome);
+all [outputs **must** be returned](https://mmhaskell.com/blog/2018/1/8/immutability-the-less-things-change-the-more-you-know).
+So in addition to the parsed tree, each part of the parser must return the rest
+of the text still to be parsed by another subparser, yielding a type
+signature of [`:: [Token] -> (a, [Token])`](https://git.adrian.geek.nz/haskell-stylist.git/tree/src/Data/CSS/Syntax/StylishUtil.hs#n11).
+Haskell lets you combine such subparsers together into what are
+called "[parser combinators](https://remusao.github.io/posts/whats-in-a-parser-combinator.html)".
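That shape can be sketched with toy types (the real code uses Haskell CSS Syntax's `Token` type; everything below is a simplified, hypothetical stand-in):

```haskell
-- Simplified stand-ins for the real token type.
data Token = Ident String | Comma | Semicolon deriving (Show, Eq)

-- Every subparser returns its result alongside the unconsumed tokens.
type Parser a = [Token] -> (a, [Token])

-- Parse a single identifier, if present.
ident :: Parser (Maybe String)
ident (Ident s : rest) = (Just s, rest)
ident tokens           = (Nothing, tokens)

-- Sequence two subparsers, threading the leftover tokens between them.
andThen :: Parser a -> Parser b -> Parser (a, b)
andThen pa pb tokens =
    let (a, rest)  = pa tokens
        (b, rest') = pb rest
    in ((a, b), rest')
```

Combinators like `andThen` are how small subparsers compose into a whole-stylesheet parser without any mutable parser state.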
+
+Once each style rule is parsed, a method is called on a
+[`StyleSheet`](https://git.adrian.geek.nz/haskell-stylist.git/tree/src/Data/CSS/Syntax/StyleSheet.hs#n27)
+"[typeclass](http://book.realworldhaskell.org/read/using-typeclasses.html)"
+to return a modified datastructure containing the new rule. And a different method
+is called to parse any [at-rules](https://www.w3.org/TR/CSS2/syndata.html#at-rules).
+
+## Pseudoclasses
+Many of my `StyleSheet` implementations handle only certain aspects of CSS,
+handing off to another implementation to perform the rest.
+
+For example most pseudoclasses (ignoring interactive aspects I have no plans to
+implement) can be re-written into simpler selectors. So I added a configurable
+`StyleSheet` [decorator](https://refactoring.guru/design-patterns/decorator) just
+to do that!
+
+This pass also resolves any [namespaces](https://www.w3.org/TR/css3-namespace/),
+and corrects [`:before` & `:after`](https://www.w3.org/TR/CSS2/selector.html#before-and-after)
+to be parsed as pseudoelements.
+
+## Media Queries & `@import`
+CSS defines a handful of at-rules which can control whether contained style rules
+will be applied:
+
+* [`@document`](https://developer.mozilla.org/en-US/docs/Web/CSS/@document) allows user & useragent stylesheets to apply style rules only for certain (X)HTML documents & URLs. An interesting Rhapsode-specific feature is [`@document unstyled`](https://git.adrian.geek.nz/haskell-stylist.git/tree/src/Data/CSS/Preprocessor/Conditions.hs#n84) which applies only if no author styles have already been parsed.
+* [`@media`](https://drafts.csswg.org/css-conditional-3/#at-media) applies its style rules only if the given media query evaluates to true. Whilst in Rhapsode only the [`speech`](https://www.w3.org/TR/CSS2/media.html#media-types) or `-rhapsode` mediatypes are supported, I've implemented a full caller-extensible [Shunting Yard](https://en.wikipedia.org/wiki/Shunting-yard_algorithm) interpreter.
+* [`@import`](https://www.w3.org/TR/css3-cascade/#at-import) fetches & parses the given URL if the given mediatype evaluates to true when you call [`loadImports`](https://git.adrian.geek.nz/haskell-stylist.git/tree/src/Data/CSS/Preprocessor/Conditions.hs#n138). As a privacy protection for future browsers, callers may avoid hardware details leaking to the webserver by being more vague in this pass.
+* [`@supports`](https://drafts.csswg.org/css-conditional-3/#at-supports) applies style rules only if the given CSS property or selector syntax parses successfully.
+
+Since media queries might need to be rechecked when, say, the window has been resized,
+`@media` (and downloaded `@import`) are resolved to populate a new `StyleSheet`
+implementation only when the [`resolve`](https://git.adrian.geek.nz/haskell-stylist.git/tree/src/Data/CSS/Preprocessor/Conditions.hs#n151)
+function is called. Though again this is overengineered for Rhapsode's needs: since it
+renders pages to an infinite auditory timeline rather than a window, media queries
+are *barely* useful here.
+
+## Indexing
+Ultimately Rhapsode parses CSS style rules to be stored in a [hashmap](https://en.wikipedia.org/wiki/Hash_table)
+(or rather a [Hash Array Mapped Trie](https://en.wikipedia.org/wiki/Hash_array_mapped_trie))
+[indexed](https://git.adrian.geek.nz/haskell-stylist.git/tree/src/Data/CSS/Style/Selector/Index.hs#n50)
+under the right-most selector if any. This dramatically cuts down on how
+many style rules have to be considered for each element being styled.
+
+Thus for each element needing styling, it [looks up](https://git.adrian.geek.nz/haskell-stylist.git/tree/src/Data/CSS/Style/Selector/Index.hs#n68)
+just those style rules which match its name, attributes, IDs, and/or classes.
+However this only considers a single test from each rule's selector, so we need an…
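A minimal sketch of that indexing strategy, with a hypothetical, heavily simplified `Rule` type keyed only on tag names (the real index also considers attributes, IDs, & classes):

```haskell
import qualified Data.Map as M

-- Hypothetical simplified rule: its rightmost tag test plus declarations.
data Rule = Rule { rightmostTag :: String, declarations :: [(String, String)] }
  deriving (Show, Eq)

-- Group rules under their rightmost selector's tag name,
-- so lookup no longer scans every rule for every element.
index :: [Rule] -> M.Map String [Rule]
index = foldr (\r -> M.insertWith (++) (rightmostTag r) [r]) M.empty

-- Only rules sharing the element's tag name need be considered.
rulesFor :: M.Map String [Rule] -> String -> [Rule]
rulesFor idx tag = M.findWithDefault [] tag idx
```

(`Data.Map` stands in here for the Hash Array Mapped Trie the text describes; the access pattern is the same.)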
+
+## Interpreter
+To truly determine whether an element matches a [CSS selector](https://www.w3.org/TR/selectors-3/),
+we need to actually evaluate that selector! I've implemented this in 3 parts:
+
+* [Lowering](https://git.adrian.geek.nz/haskell-stylist.git/tree/src/Data/CSS/Style/Selector/Interpret.hs#n53) - Reduces how many types of selector tests need to be compiled by e.g. converting `.class` to `[class~=class]`.
+* [Compilation](https://git.adrian.geek.nz/haskell-stylist.git/tree/src/Data/CSS/Style/Selector/Interpret.hs#n34) - Converts the parsed selector into a [lambda](https://teraum.writeas.com/anatomy-of-things) function you can call as the style rule is being added to the store.
+* [Runtime](https://git.adrian.geek.nz/haskell-stylist.git/tree/src/Data/CSS/Style/Selector/Interpret.hs#n88) - Provides functions that may be called as part of evaluating a CSS selector.
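The lowering step might look something like this sketch (hypothetical simplified types; the real `Interpret.hs` handles many more selector forms):

```haskell
-- A fragment of a selector-test type, simplified for illustration.
data Match = Includes | Equals deriving (Show, Eq)
data Test  = Tag String | Class String | Attribute String Match String
  deriving (Show, Eq)

-- Rewrite sugared tests into attribute tests, so the compilation
-- step only needs to understand one representation.
lower :: Test -> Test
lower (Class c) = Attribute "class" Includes c  -- .class ≡ [class~=class]
lower test      = test
```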
+
+Whether there's actually any compilation happening is another question for the
+[Glasgow Haskell Compiler](https://www.haskell.org/ghc/), but regardless I find
+it a convenient way to write and think about it.
+
+Selectors are interpreted from right-to-left as that tends to short-circuit sooner,
+upon an alternate inversely-linked representation of the element tree parsed by
+[XML Conduit](https://hackage.haskell.org/package/xml-conduit).
+
+**NOTE** In webapp-capable browser engines [`querySelectorAll`](https://developer.mozilla.org/en-US/docs/Web/API/Element/querySelectorAll)
+tends to use a *slightly* different selector interpreter because there we know
+the ancestor element. This makes it more efficient to interpret *those* selectors
+left-to-right.
+
+## Specificity
+Style rules should be sorted by a ["selector specificity"](https://www.w3.org/TR/selectors-3/#specificity),
+which is computed by counting tests on IDs, classes, & tagnames. Ties are broken
+by which rule comes first in the source code and whether the stylesheet came from the
+[browser, user, or webpage](https://www.w3.org/TR/CSS2/cascade.html#cascade).
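Counting those tests can be as simple as folding the selector into a tuple, which Haskell already compares lexicographically, exactly the order the cascade wants (a sketch, not Stylist's actual code):

```haskell
-- Simplified selector-test type for illustration.
data Test = ID String | Class String | Tag String

-- (IDs, classes, tags): tuples compare left-to-right,
-- so sorting on this triple yields cascade order for free.
specificity :: [Test] -> (Int, Int, Int)
specificity = foldr count (0, 0, 0)
  where count (ID _)    (a, b, c) = (a + 1, b, c)
        count (Class _) (a, b, c) = (a, b + 1, c)
        count (Tag _)   (a, b, c) = (a, b, c + 1)
```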
+
+This is implemented as a decorator around the interpreter & (in turn) indexer.
+Another decorator strips [`!important`](https://www.w3.org/TR/CSS2/cascade.html#important-rules)
+off the end of any relevant CSS property values, generating new style rules with
+higher priority.
+
+## Validation
+Once `!important` is stripped off, the embedding application is given a chance
+to validate whether the syntax is valid &, as such, whether it should participate
+in the CSS cascade. Invalid properties are discarded.
+
+At the same time the embedding application can expand CSS
+[shorthands](https://developer.mozilla.org/en-US/docs/Web/CSS/Shorthand_properties)
+into one or more longhand properties. E.g. convert `border-left: thin solid black;`
+into `border-left-width: thin; border-left-style: solid; border-left-color: black;`.
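Such an expansion is a natural fit for pattern matching (a sketch with hypothetical names; real shorthands accept more varied value orders than shown here):

```haskell
-- Expand one shorthand declaration into its longhands;
-- anything unrecognized passes through untouched.
expand :: (String, [String]) -> [(String, [String])]
expand ("border-left", [width, style, colour]) =
    [ ("border-left-width", [width])
    , ("border-left-style", [style])
    , ("border-left-color", [colour]) ]
expand declaration = [declaration]
```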
+
+## CSS [Cascade](https://www.w3.org/TR/css3-cascade/)
+This was trivial to implement! Once you have a list of style rules listed by
+specificity, just load all their properties into a
+[hashmap](http://hackage.haskell.org/package/unordered-containers) & back!
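As a sketch: with each rule's declarations already sorted by ascending specificity, cascading is just a left-biased union per rule (`Data.Map` stands in below for the hashmap the real code uses; the logic is the same):

```haskell
import qualified Data.Map as M

-- Rules arrive lowest-specificity first; `M.union` is left-biased,
-- so folding each later rule in on the left lets it override.
cascade :: [[(String, String)]] -> M.Map String String
cascade = foldl (\winners rule -> M.fromList rule `M.union` winners) M.empty
```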
+
+Maybe I'll write a little blogpost about how many webdevs seem to be
+[scared of the cascade](https://mxb.dev/blog/the-css-mindset/#h-the-cascade-is-your-friend)…
+
+After cascade, methods are called on a given [`PropertyParser`](https://git.adrian.geek.nz/haskell-stylist.git/tree/src/Data/CSS/Style/Cascade.hs#n18)
+to parse each longhand property into an in-memory representation that's easier
+to process. This typeclass *also* has useful decorators, though few are needed
+for the small handful of speech-related properties.
+
+Haskell's [pattern matching](http://learnyouahaskell.com/syntax-in-functions#pattern-matching)
+syntax makes the tedious work of parsing the
+[sheer variety](https://www.w3.org/TR/CSS2/propidx.html#q24.0) of CSS properties
+absolutely trivial. I didn't have to implement a DSL like other
+[browser engines do](http://trac.webkit.org/browser/webkit/trunk/Source/WebCore/css/CSSProperties.json)!
+This is the reason why I chose Haskell!
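For instance, parsing a speech property might look like this (a hypothetical sketch in the spirit of Rhapsode's `PropertyParser`, not its actual code):

```haskell
-- Simplified token & style types for illustration.
data Token  = Ident String | Percentage Float deriving (Show, Eq)
data Speech = Volume Float | Speak Bool deriving (Show, Eq)

-- Pattern match directly on the property name & token list;
-- unrecognized values yield Nothing so the declaration is discarded.
longhand :: String -> [Token] -> Maybe Speech
longhand "volume" [Ident "silent"] = Just (Volume 0)
longhand "volume" [Ident "medium"] = Just (Volume 50)
longhand "volume" [Percentage pc]  = Just (Volume pc)
longhand "speak"  [Ident "none"]   = Just (Speak False)
longhand "speak"  [Ident "normal"] = Just (Speak True)
longhand _ _                       = Nothing
</imports>
```

Each new property or value is just one more equation, no DSL required.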
+
+## CSS Variables [`var()`](https://www.w3.org/TR/css-variables-1/)
+In CSS3, any property prefixed with [`--`](https://www.w3.org/TR/css-variables-1/#defining-variables)
+will participate in CSS cascade to specify what tokens the `var()` function should
+substitute in. If the property no longer parses successfully after this substitution
+it is ignored. A bit of a [gotcha for webdevs](https://matthiasott.com/notes/css-custom-properties-fail-without-fallback),
+but makes it quite trivial for me to implement!
+
+In fact, beyond prioritizing extraction of `--`-prefixed properties, I needed little
+more than a [trivial](https://git.adrian.geek.nz/haskell-stylist.git/tree/src/Data/CSS/Style.hs#n91)
+`PropertyParser` decorator.
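That substitution pass can be sketched as follows (hypothetical simplified tokens; a real lexer distinguishes `var()` during tokenization, and unknown names expanding to nothing is a simplification that leaves the declaration to fail parsing & be ignored):

```haskell
import qualified Data.Map as M

-- Simplified token type: a var(--name) reference, or anything else.
data Token = Var String | Tok String deriving (Show, Eq)

-- Splice each var(--name) reference with the custom property's
-- tokens, as gathered during the cascade.
substitute :: M.Map String [Token] -> [Token] -> [Token]
substitute vars = concatMap expandTok
  where expandTok (Var name) = M.findWithDefault [] name vars
        expandTok token      = [token]
```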
+
+## [Counters](https://www.w3.org/TR/css-counter-styles-3/)
+There's a [handful of CSS properties](https://www.w3.org/TR/CSS2/text.html#q16.0)
+which alter the text parsed from the HTML document, predominantly by including
+counters, which I use to render [`<ol>`](https://developer.mozilla.org/en-US/docs/Web/HTML/Element/ol)
+elements or to generate marker labels for the arrow keys to jump to.
+
+To implement these I added a [`StyleTree`](https://git.adrian.geek.nz/haskell-stylist.git/tree/src/Data/CSS/StyleTree.hs)
+abstraction to hold the relationship between all parsed `PropertyParser` style
+objects & aid tree traversals. From there I implemented a [second](https://git.adrian.geek.nz/haskell-stylist.git/tree/src/Data/CSS/Preprocessor/Text.hs#n31)
+`PropertyParser` decorator with two tree traversals:
+[one](https://git.adrian.geek.nz/haskell-stylist.git/tree/src/Data/CSS/Preprocessor/Text.hs#n179)
+to collapse whitespace & the [other](https://git.adrian.geek.nz/haskell-stylist.git/tree/src/Data/CSS/Preprocessor/Text.hs#n112)
+to track counter values before substituting them (as strings) in place of any
+[`counter()`](https://www.w3.org/TR/CSS2/generate.html#counter-styles) or
+[`counters()`](https://developer.mozilla.org/en-US/docs/Web/CSS/counters()) functions.
+
+## [`url()`](https://www.w3.org/TR/CSS2/syndata.html#uri)
+In most browser engines any resource references (via the `url()` function, which
+incidentally requires special effort to lex correctly & resolve any relative links)
+are resolved after the page has been fully styled. I opted to do this prior to
+styling instead, as a privacy measure that was just as easy to implement as it
+would have been to skip.
+
+Granted this does lead to impaired functionality of the
+[`style`](https://www.w3.org/TR/html401/present/styles.html#h-14.2.2)
+attribute, but please don't use that anyways!
+
+This was implemented as a pair of `StyleSheet` implementations: one to extract
+relevant URLs from the stylesheet, and the other to substitute in the filepaths
+where they were downloaded. eSpeak NG will parse these
+[`.wav`](http://www-mmsp.ece.mcgill.ca/Documents/AudioFormats/WAVE/WAVE.html)
+files when it's ready to play these sound effects.
+
+## [CSS Inheritance](https://www.w3.org/TR/CSS2/cascade.html#inheritance)
+Future browser engines of mine will handle this differently, but for Rhapsode I
+simply reformat the style tree into an [SSML document](https://www.w3.org/TR/speech-synthesis/)
+to hand straight to [eSpeak NG](https://adrian.geek.nz/docs/espeak.html).
+
+[eSpeak NG](http://espeak.sourceforge.net/ssml.html) (running in-process) will
+then parse this XML with the aid of a stack, converting it into control codes
+within the text its later stages will gradually convert to sound.
+
+---
+
+While all this *is* useful to webdevs wanting to give a special feel to their
+webpages (which, within reason, I don't object to), my main incentive to implement
+CSS was for my own sake in designing Rhapsode's
+[useragent stylesheet](https://git.adrian.geek.nz/rhapsode.git/tree/useragent.css).
+And that stylesheet takes advantage of most of the above.
+
+Sure there are features (like support for CSS variables or most pseudoclasses) I
+decided to implement just because they were easy, but the only thing I'd consider
+extra complexity beyond the needs of an auditory browser engine are media queries.
+But I'm sure I'll find a use for those in future browser engines.
+
+Otherwise all this code would have to be in Rhapsode in some form or other to
+give a better auditory experience than eSpeak NG can deliver itself!
A => _posts/2021-01-23-why-html.md +51 -0
@@ 1,51 @@
+---
+layout: post
+title: Why (Mostly-)Standard HTTP/HTML/optional CSS?
+author: Adrian Cochrane
+date: 2021-01-23 15:20:24 +1300
+---
+
+[Modern](https://webkit.org/) [web](https://www.chromium.org/blink) [browsers](https://hg.mozilla.org/mozilla-central/) are massively complex [beasts](https://roytang.net/2020/03/browsers-http/), implementing an [evergrowing mountain](https://drewdevault.com/2020/03/18/Reckless-limitless-scope.html) of [supposedly-open standards](https://html.spec.whatwg.org/multipage/) few can [keep up with](https://css-tricks.com/the-ecological-impact-of-browser-diversity/). So why do I think I *can* do better whilst adhering to the [same standards](https://w3.org/)? Why do I think that's valuable?
+
+## Where's the complexity?
+[XML](https://www.w3.org/TR/2008/REC-xml-20081126/), [XHTML](https://www.w3.org/TR/xhtml1/), & [HTTP](https://tools.ietf.org/html/rfc2616) are all very trivial to parse, with numerous [parsers](https://en.wikipedia.org/wiki/Category:XML_parsers) implemented in just about every programming language. HTML *wouldn't* be much worse if it weren't for WHATWG's [error recovery specification](https://html.spec.whatwg.org/multipage/parsing.html#tree-construction) most webdevs *don't* seem to take advantage of anyways. [CSS](https://www.w3.org/Style/CSS/Overview.en.html) should be *optional* for web browsers to support, and most of the complexity there is better considered an inherent part of rendering international, resizable, formatted [text](https://gankra.github.io/blah/text-hates-you/). Even if your OS is hiding this complexity from applications.
+
+If it's not there, *where* is the complexity? [Richtext layout](https://raphlinus.github.io/text/2020/10/26/text-layout.html) in arbitrarily-sized windows is one answer, which I think is disrespectful to want to [do away with](https://danluu.com/sounds-easy/). But unlike what some browser devs suggest it isn't the full answer.
+
+Expecting [JavaScript](https://262.ecma-international.org/10.0/) to be [fast](https://bellard.org/quickjs/) yet [secure](https://webkit.org/blog/8048/what-spectre-and-meltdown-mean-for-webkit/) so you can [reshape](https://andregarzia.com/2020/03/private-client-side-only-pwas-are-hard-but-now-apple-made-them-impossible.html) a beautiful document publishing system into the application distribution platform you're upset your proprietary OS doesn't deliver *is* a massive source of ([intellectually](https://webkit.org/blog/10308/speculation-in-javascriptcore/)-[stimulating](https://v8.dev/blog/pointer-compression)) complexity. The 1990's-era over-engineered [object-oriented](https://web.archive.org/web/20010429235709/http://www.bluetail.com/~joe/vol1/v1_oo.html) representation of parsed HTML which leaves nobody (including JavaScript optimizers) happy is almost 200,000 lines of code in WebKit, which CSS barely cares about. The [videoconferencing backend](https://www.w3.org/TR/webrtc/) they embed for [Google Hangouts](https://hangouts.google.com/) takes almost as much code as the rest of the browser!
+
+I can back up these claims both qualitatively & quantitatively.
+
+So yes, dropping JavaScript support makes a huge difference! Not worrying about parsing long-invalid HTML correctly makes a difference, not that we shouldn't [recover from errors](https://www.w3.org/2004/04/webapps-cdf-ws/papers/opera.html). Even moving webforms out-of-line from their embedding webpages to simplify the user interaction, to the point they can be accessed via the unusual human input devices that interest me, makes a difference, whilst stopping webdevs from [complaining](https://css-tricks.com/custom-styling-form-inputs-with-modern-css-features/) about OS-native controls clashing with their designs.
+
+There's lots and lots of feature bloat we can remove from web browsers before we jump ship to something [new](http://gopher.floodgap.com/overbite/relevance.html).
+
+## There's Valuable Writing Online
+To many [the web is now](https://thebaffler.com/latest/surfin-usa-bevins) just a handful of Silicon Valley giants, surrounded by newssites, etc [begging you](https://invidiou.site/watch?v=OFRjZtYs3wY) to accept numerous popups before you can start reading. It's no wonder they want to burn it to the ground!
+
+But beneath all the skyscrapers and commercialization there's already a vast beautiful underbelly of [knowledge](http://www.perseus.tufts.edu/hopper/) and [entertainment](https://decoderringtheatre.com/). Writing that [deserves to be preserved](http://robinrendle.com/essays/newsletters.html). Pages that, for the most part, work perfectly fine in Rhapsode, as validated by manual testing.
+
+It is for this "[longtail](https://longtail.typepad.com/the_long_tail/2008/11/does-the-long-t.html)" I develop Rhapsode. I couldn't care less that I broke the "[fat head](https://facebook.com/)".
+
+## Links To Webapps
+A common argument in favour of jumping ship to, say, [Gemini](https://gemini.circumlunar.space/) (not that I dislike Gemini) is that on the existing web readers are bound to [frequently encounter links](https://gemini.circumlunar.space/docs/faq.html) to, say, JavaScript-reliant sites. I think such arguments underestimate [how few sites](https://hankchizljaw.com/wrote/the-(extremely)-loud-minority/) are actually broken in browsers like [Rhapsode](https://rhapsode.adrian.geek.nz/), [Lynx](https://lynx.browser.org/), & [Dillo](https://www.dillo.org/). This damage is easily repairable with a little automation, which has already been done via content mirrors like [Nitter](https://nitter.net/) & [Invidious](https://invidio.us/).
+
+Rhapsode supports URL [redirection/blocking extensions](https://hackage.haskell.org/package/regex-1.1.0.0/docs/Text-RE-Tools-Edit.html) for this very reason, and I hope that its novelty leads people to forgive any other brokenness they encounter, rightfully blaming the website instead.
+
+## Why Not JavaScript?
+To be clear, I do not wish to demean anyone for using JavaScript. There are valid use cases you can't yet achieve any other way, *some* of which have enhanced the document web and which we should [find alternative, more declarative ways](http://john.ankarstrom.se/replacing-javascript/) to preserve. I always like a [good visualization](https://www.joshworth.com/dev/pixelspace/pixelspace_solarsystem.html)! And there is a need for interactive apps to let people do more with computers than read what others have written.
+
+What I want long term is for JavaScript to leave the document web. For the browsers' feature set (like [payments](https://tools.ietf.org/html/rfc8905) & [videocalls](https://tools.ietf.org/html/rfc3261#section-19.1)) to be [split between more apps](https://www.freedesktop.org/wiki/Distributions/AppStream/). If this means websites & webapps split into their own separate platforms, I'll be happy. Though personally I'd prefer to oneclick-install beautiful [consistently-designed](https://elementary.io/docs/human-interface-guidelines) apps from the [elementary AppCenter](https://appcenter.elementary.io/)! And I want it to be reasonable to audit any software running on my computer.
+
+In part my complaint with JavaScript is that it's where most of the web's recent feature bloat has been landing. But I do think it was a mistake to allow websites to run [arbitrary computation](https://garbados.github.io/my-blog/browsers-are-a-mess.html) on the client. Sure that computation is "[sandboxed](https://web.archive.org/web/20090424010915/http://www.google.com/googlebooks/chrome/small_26.html)", but that sandbox isn't as secure as we thought (eBay's been able to determine which ports you have open [citation needed]), especially given [hardware vulnerabilities](https://spectreattack.com/); its restrictions [are loosening](https://web.dev/usb/), & there's plenty of antifeatures you can add well within its bounds. JavaScript [degrades my experience](https://www.wired.com/2015/11/i-turned-off-javascript-for-a-whole-week-and-it-was-glorious/) on the web far more often than it enhances it.
+
+I want [standards](https://www.freedesktop.org/wiki/Specifications/) that give implementers [UI leeway](https://yewtu.be/watch?v=fPFdV-Z69Lo). JavaScript is not that!
+
+Even if I did implement JavaScript in Rhapsode all that would accomplish is raise expectations impossibly high, practically none of those JavaScript-*reliant* websites (where they don't [block me outright](https://www.bleepingcomputer.com/news/google/google-now-bans-some-linux-web-browsers-from-their-services/)) will deliver a decent auditory UX. JavaScript isn't necessary for delivering a [great auditory UX](https://www.smashingmagazine.com/2020/12/making-websites-accessible/), only for repairing the damage from focusing exclusively on sighted readers.
+
+## Why CSS?
+Webdevs harm the [readability of their websites](https://css-tricks.com/reader-mode-the-button-to-beat/) via CSS frequently enough that most browsers offer a button to replace those stylesheets. So why do I want to let them continue?
+
+I don't. I want a working CSS engine for my own sake in designing Rhapsode's auditory experience, and to allow readers to repair broken websites in a familiar language. I think I can expose it to webdevs whilst minimizing the damage they can do, by e.g. not supporting overlays & enforcing minimum text contrast in visual browsers. For Rhapsode I prevent websites from overriding the "dading" used to indicate links you can repeat back for it to follow.
+
+Regardless I believe CSS should be *optional*. Web browsers shouldn't *have to* implement CSS. Websites shouldn't *have to* provide CSS for their pages to be legible on modern monitors. And users must be able to switch stylesheets if the current one doesn't work for them.
A => _posts/2021-06-13-voice2json.md +161 -0
@@ 1,161 @@
+---
+layout: post
+title: Voice Input Supported in Rhapsode 5!
+author: Adrian Cochrane
+date: 2021-06-13T16:10:28+12:00
+---
+Not only can Rhapsode read pages aloud to you via [eSpeak NG](https://github.com/espeak-ng/espeak-ng)
+and its [own CSS engine](/2020/11/12/css.html), but now you can speak aloud to *it* via
+[Voice2JSON](https://voice2json.org/)! All without trusting or relying upon any
+[internet services](https://www.gnu.org/philosophy/who-does-that-server-really-serve.html),
+except of course for [bog-standard](https://datatracker.ietf.org/doc/html/rfc7230)
+webservers to download your requested information from. Thereby completing my
+[vision](/2020/10/31/why-auditory.html) for Rhapsode's reading experience!
+
+This speech recognition can be triggered either using the <kbd>space</kbd> key or by calling Rhapsode's name
+<span>(Okay, by saying <q>Hey Mycroft</q> because I haven't bothered to train it)</span>.
+
+## Thank you Voice2JSON!
+Voice2JSON is **exactly** what I want from a speech-to-text engine!
+
+Across its 4 backends <span>(CMU [PocketSphinx](https://github.com/cmusphinx/pocketsphinx),
+Dan Povey's [Kaldi](https://kaldi-asr.org/), Mozilla [DeepSpeech](https://github.com/mozilla/DeepSpeech),
+& Kyoto University's [Julius](https://github.com/julius-speech/julius))</span> it supports
+*18* human languages! I always like to see more language support, but *this is impressive*.
+
+I can feed it <span>(lightly-preprocessed)</span> whatever random phrases I find in link elements, etc.
+to use as voice commands, even feeding it different commands for every webpage, including
+ones with unusual words.
+
+It operates entirely on your device, only using the internet initially to download
+an appropriate <q>profile</q> for your language.
+
+And when I implement webforms its <q>slots</q> feature will be **invaluable**.
+
+The only gotcha is that I needed to also add a [JSON parser](https://hackage.haskell.org/package/aeson)
+to Rhapsode's dependencies.
+
+## Mechanics
+To operate Voice2JSON you rerun [`voice2json train-profile`](http://voice2json.org/commands.html#train-profile)
+every time you edit [`sentences.ini`](http://voice2json.org/sentences.html) or
+any of its referenced files to update the list of supported voice commands.
+This prepares a <q>language model</q> to guide the output of
+[`voice2json transcribe-stream`](http://voice2json.org/commands.html#transcribe-stream)
+or [`transcribe-wav`](http://voice2json.org/commands.html#transcribe-wav),
+whose output you'll probably pipe into
+[`voice2json recognize-intent`](http://voice2json.org/commands.html#recognize-intent)
+to determine which <q>intent</q> from `sentences.ini` it matches.
+
+If you want this voice recognition to be triggered by some <q>wake word</q>
+run [`voice2json wait-wake`](http://voice2json.org/commands.html#wait-wake)
+to determine when that keyphrase has been said.
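Put together, the commands above form a small shell pipeline. A sketch (the WAV filename here is made up; see the Voice2JSON docs for the full options):

```shell
# Recompile the language model after (re)writing sentences.ini
voice2json train-profile

# Transcribe a recorded command, then match it against the intents
# declared in sentences.ini; both stages emit JSON Lines on stdout.
voice2json transcribe-wav < command.wav | voice2json recognize-intent
```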
+
+### `voice2json train-profile`
+For every page Rhapsode outputs a `sentences.ini` file & runs `voice2json train-profile`
+to compile this mix of [INI](https://www.techopedia.com/definition/24302/ini-file) &
+[Java Speech Grammar Format](https://www.w3.org/TR/jsgf/) syntax into an appropriate
+[NGram](https://blog.xrds.acm.org/2017/10/introduction-n-grams-need/)-based
+<q>language model</q> for the backend chosen by the
+[downloaded profile](https://github.com/synesthesiam/voice2json-profiles).
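For illustration, here's the flavour of `sentences.ini` Voice2JSON consumes. The intents & sentences below are invented for this example, not what Rhapsode actually emits:

```ini
[FollowLink]
follow [the] (home page | contact us){link}

[Navigate]
go (back | forward)
```

Square brackets mark optional words, parenthesized pipes mark alternatives, & `{link}` tags the matched alternative as a named slot in the recognized intent.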
+
+Once it's parsed `sentences.ini`, Voice2JSON optionally normalizes the sentence casing and
+lowers any numeric ranges, <q>slot references</q> from external files or programs, & numeric digits
+via [num2words](https://pypi.org/project/num2words/) before reformatting it into a
+[NetworkX](https://pypi.org/project/networkx/) [graph](https://www.redblobgames.com/pathfinding/grids/graphs.html)
+with weighted edges. This resulting
+[Nondeterministic Finite Automaton](https://www.geeksforgeeks.org/%E2%88%88-nfa-of-regular-language-l-0100-11-and-l-b-ba/) (NFA)
+is [saved](https://docs.python.org/3/library/pickle.html) & [gzip](http://www.gzip.org/)'d
+to the profile before lowering it further to an [OpenFST](http://www.openfst.org/twiki/bin/view/FST/WebHome)
+graph which, with a handful of [opengrm](http://www.opengrm.org/twiki/bin/view/GRM/WebHome) commands,
+is converted into an appropriate language model.
+
+Whilst lowering the NFA to a language model Voice2JSON looks up how to pronounce every unique
+word in that NFA, consulting [Phonetisaurus](https://github.com/AdolfVonKleist/Phonetisaurus)
+for any words the profile doesn't know about. Phonetisaurus in turn evaluates the word over a
+[Hidden Markov](https://www.jigsawacademy.com/blogs/data-science/hidden-markov-model) n-gram model.
+
+### `voice2json transcribe-stream`
+
+`voice2json transcribe-stream` pipes 16-bit 16kHz mono [WAV](https://datatracker.ietf.org/doc/html/rfc2361)s
+from a specified file or profile-configured record command
+<span>(defaults to [ALSA](https://alsa-project.org/wiki/Main_Page))</span>
+to the backend & formats its output sentences with metadata inside
+[JSON Lines](https://jsonlines.org/) objects. To determine when a voice command
+ends it uses some sophisticated code [extracted](https://pypi.org/project/webrtcvad/)
+from *the* WebRTC implementation <span>(from Google)</span>.
+
+That 16kHz audio sampling rate is interesting: it's far below the 44.1kHz sampling
+rate typical for digital audio. Presumably this reduces the computational load
+whilst preserving the frequencies
+<span>(max 8khz per [Nyquist-Shannon](https://invidio.us/watch?v=cIQ9IXSUzuM))</span>
+typical of human speech.
+
+### `voice2json recognize-intent`
+
+To match this output to the grammar defined in `sentences.ini` Voice2JSON provides
+the `voice2json recognize-intent` command. This reads back in the compressed
+NetworkX NFA to find the best path, fuzzily or not, via
+[depth-first-search](https://www.techiedelight.com/depth-first-search) which matches
+each input sentence. Once it has that path it iterates over it to resolve & capture:
+
+1. Substitutions
+2. Conversions
+3. Tagged slots
+
+The resulting information from each of these passes is gathered & output as JSON Lines.
+
+In Rhapsode I apply a further fuzzy match, the same I've always used for keyboard input,
+via [Levenshtein Distance](https://devopedia.org/levenshtein-distance).
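As a sketch of that fuzzy match (my own illustration, not Rhapsode's actual code), here is the classic dynamic-programming formulation of Levenshtein distance in Haskell:

```haskell
import Data.Array

-- Edit distance between two strings: the minimum number of insertions,
-- deletions, & substitutions turning one into the other. The table is
-- built lazily, so each cell is computed at most once.
levenshtein :: String -> String -> Int
levenshtein a b = table ! (m, n)
  where
    (m, n) = (length a, length b)
    xs = listArray (1, m) a
    ys = listArray (1, n) b
    table = listArray ((0, 0), (m, n))
                      [dist i j | i <- [0..m], j <- [0..n]]
    dist 0 j = j  -- insert all of b's first j characters
    dist i 0 = i  -- delete all of a's first i characters
    dist i j
      | xs ! i == ys ! j = table ! (i-1, j-1)       -- characters match: free
      | otherwise = 1 + minimum                     -- delete, insert, or substitute
          [table ! (i-1, j), table ! (i, j-1), table ! (i-1, j-1)]
```

Rhapsode then picks the candidate command with the smallest distance to the transcribed sentence.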
+
+### `voice2json wait-wake`
+
+To trigger Rhapsode to recognize a voice command you can either press a key <aside>(<kbd>spacebar</kbd>)</aside>
+or, to stick to pure voice control, say a <q>wakeword</q> <aside>(currently <q>Hey Mycroft</q>)</aside>.
+For this there's the `voice2json wait-wake` command.
+
+`voice2json wait-wake` pipes the same 16-bit 16kHz mono WAV audio as `voice2json transcribe-stream`
+into <span>(currently)</span> [Mycroft Precise](https://mycroft-ai.gitbook.io/docs/mycroft-technologies/precise)
+& applies some [edge detection](https://www.scilab.org/signal-edge-detection)
+to the output probabilities. Mycroft Precise, from the [Mycroft](https://mycroft.ai/)
+opensource voice assistant project, is a [Tensorflow](https://www.tensorflow.org/)
+[neuralnet](https://invidious.moomoo.me/watch?v=aircAruvnKk) converting
+[spectrograms](https://home.cc.umanitoba.ca/~robh/howto.html) <span>(computed via
+[sonopy](https://pypi.org/project/sonopy/) or legacy
+[speechpy](https://pypi.org/project/speechpy/))</span> into probabilities.
+
+## Voice2JSON Stack
+Interpreting audio input into voice commands is a non-trivial task, combining the
+efforts of many projects. Last I checked Voice2JSON used the following projects to
+tackle various components of this challenge:
+
+* [Python](https://www.python.org/)
+* [Rhasspy](https://community.rhasspy.org/)
+* num2words
+* NetworkX
+* OpenFST
+* Phonetisaurus
+* opengrm
+* Mycroft Precise
+* Sonopy
+* SpeechPy
+
+And for the raw speech-to-text logic you can choose between:
+
+* PocketSphinx <span>(matches audio via several measures to a language model of a handful of types)</span>
+* Kaldi <span>(supports many more types of language models than PocketSphinx, including several neuralnet variants)</span>
+* DeepSpeech <span>(Tensorflow neuralnet, hard to constrain to a grammar)</span>
+* Julius <span>(word n-grams & context-dependent Hidden Markov Model via 2-pass [tree trellis search](https://dl.acm.org/doi/10.3115/116580.116591))</span>
+
+## Conclusion
+Rhapsode's use of Voice2JSON shows two things.
+
+First, the web could be a **fantastic** auditory experience *if only* we weren't so
+reliant on [JavaScript](https://rhapsode.adrian.geek.nz/2021/01/23/why-html.html#why-not-javascript).
+
+Second, there is *zero* reason for [Siri](https://www.apple.com/siri/), [Alexa](https://www.developer.amazon.com/en-US/alexa/),
+[Cortana](https://www.microsoft.com/en-us/cortana/), etc to offload their computation
+to [the cloud](https://grahamcluley.com/cloud-someone-elses-computer/). Voice recognition
+may not be a trivial task, but even modest consumer hardware is more than capable
+of doing a good job at it.
+
+<style>span {voice-volume: soft;}</style>
A => _posts/2021-12-24-amphiarao.md +31 -0
@@ 1,31 @@
+---
+layout: post
+title: Introducing Amphiarao!
+author: Adrian Cochrane
+date: 2021-12-23 20:55:52 +1300
+---
+
+I've now implemented <span>(save for a [HURL](https://git.adrian.geek.nz/hurl.git/) crash which badly needs addressing)</span> the core featureset of a Rhapsode bug-compatible [webpage debugger](/amphiarao), compatible with [Selenium](https://www.selenium.dev/) & whatever your favourite web browser is. Even [Rhapsode](/) itself, once I've implemented forms for it <span>(coming soon)</span>! All the features Selenium expects are implemented, though where Rhapsode doesn't support them I wrote noops.
+
+Amphiarao is implemented as a [Happstack](http://happstack.com/) locally-run non-persistent webservice exposing a [JSON](https://www.json.org/json-en.html) <span>(using the [`aeson`](https://hackage.haskell.org/package/aeson) module, named for the father of [Jason](http://www.perseus.tufts.edu/hopper/text?doc=Perseus:text:1999.01.0022:text=Library:book=1:chapter=8&highlight=jason) from Jason & The Argonauts)</span> API "[WebDriver](https://www.w3.org/TR/webdriver2/)" expected by Selenium & an [HTML](https://codeberg.org/Weblite/HTMLite/src/branch/master/htmlite.md) UI resembling the [Chrome](https://developer.chrome.com/docs/devtools/), [Firefox](https://developer.mozilla.org/en-US/docs/Tools/Page_Inspector), or especially [Safari](https://webkit.org/web-inspector/) WebInspectors you're familiar with. And loosely resembling the [elementary](https://elementary.io/) OS *"[Pantheon](https://github.com/elementary/stylesheet)"* desktop with the [Solarized](https://ethanschoonover.com/solarized/) syntax-highlighting theme, though feel free to contribute alternate [CSS themes](http://www.csszengarden.com/)! It builds upon the exact [same modules](https://git.adrian.geek.nz/) <span>(plus [`selectors`](https://hackage.haskell.org/package/selectors) & [`hxt-xpath`](https://hackage.haskell.org/package/hxt-xpath) for searching elements)</span> serverside that Rhapsode would use clientside to render the HTML Amphiarao outputs to it.
+
+To ensure Amphiarao will sound good in Rhapsode <span>(again, once [forms](https://developer.mozilla.org/en-US/docs/Learn/Forms) are implemented in it)</span> Amphiarao takes care to use semantic markup, uses [no JavaScript](https://rhapsode.adrian.geek.nz/2021/01/23/why-html.html), & applies auditory styles to its [syntax highlighting](https://buttondown.email/hillelwayne/archive/syntax-highlighting-is-a-waste-of-an-information/). These auditory styles ensure source code doesn't sound monotonous <span>(in other words: syntax highlighting may be useful visually, but it's crucial auditorily)</span> when read aloud. Since I already have the HTML & CSS parsed, I found it easy enough to have the code manually add the additional markup without pulling in a [dedicated syntax highlighter](https://adrian.geek.nz/docs/jekyll.html). Though I probably will utilize [`skylighting`](https://hackage.haskell.org/package/skylighting) later when I expand the UI featureset beyond what's expected by Selenium...
+
+## Why?
+I doubt Amphiarao will get much use by web developers wanting to ensure their pages work well in my efforts, though I'd *love* to be proven wrong & would welcome it!
+
+I do know that *I'd* use it, to aid my own testing of my browser engine stack as I extend it to tackle even more ambitious projects <span>(Something [visual](/tv)?)</span>! And just like existing WebInspectors are implemented in a way which *showcases* their browsers' capabilities, so does Amphiarao. Notably the [**universality**](https://rhapsode.adrian.geek.nz/2020/10/31/why-auditory.html) where I can target multiple differing mediums with the exact same markup.
+
+## Visual Design
+I don't claim to be a strong visual designer, but I know what I like. I find borders to be visual noise I wish to avoid. I prefer communicating via [typography](https://practicaltypography.com/) & layout (which I largely implemented via [CSS Grid](https://developer.mozilla.org/en-US/docs/Web/CSS/CSS_Grid_Layout)), though when not overused differing backgrounds can be nice and useful too.
+
+I copied <span>([rigorously](https://observer.com/2015/02/meet-the-man-behind-solarized-the-most-important-color-scheme-in-computer-history/)-[chosen](https://blog.elementary.io/look-and-feel-changes-elementary-os-6/))</span> colour schemes from elementary & Solarized because:
+
+1. I use them every day & love them
+2. I don't know how to choose my own
+
+**Hey**, if WebKit's WebInspector can look like Mac OS X everywhere, Amphiarao can look like elementary everywhere! And yes, there is a darkmode.
+
+But if you have different visual-design tastes than mine, all it'd take is swapping out the CSS. Which I'll ensure is trivial for readers to do in any of my browser engines.
+
+<style>span {voice-volume: soft}</style>
A => _posts/2022-07-04-op-parsing.md +53 -0
@@ 1,53 @@
+---
+layout: post
+title: Operator Parsing in Haskell
+author: Adrian Cochrane
+date: 2022-07-04 19:48:15 +1200
+---
+
+Plenty has happened since I last blogged here, including banging my head against Haskell's text rendering libraries (I hoped to be blogging about that...) & rejigging my plans for Rhapsode & similar browser engines. Now I'm actively taking steps towards turning Rhapsode into a decent browser you can install as an app from your package repositories, as opposed to a browser engine waiting for others to build upon. As part of that I'm [commissioning art](https://www.artstation.com/betalars) to have something to display onscreen!
+
+This display will be delivered as a [Parametric SVG](http://parametric-svg.js.org/) file so I wrote [some code](https://git.adrian.geek.nz/parametric-svg.git/tree/) to implement that SVG extension such that I can render it in a [GTK UI](https://gtk.org/). This code, written in [Haskell](https://www.haskell.org/) to reuse the [same XML parser](https://hackage.haskell.org/package/xml-conduit) I've been using & bridged over into [Vala](https://wiki.gnome.org/Projects/Vala) via their C foreign-function interfaces, lowers the parametric aspects before handing it off to [GDK Pixbuf](http://docs.gtk.org/gdk-pixbuf/) & [rSVG](https://wiki.gnome.org/Projects/LibRsvg).
+
+It does so by traversing the XML tree (exposed as strings to Vala) interpreting every attribute with a supported namespace, generating a value to store in the corresponding unnamespaced attribute, overriding that attribute if it already exists. The interpreter, where the bulk of the logic is, references a mapping of values populated from the Vala code.
+
+## Lexing
+Interpreting a formula is split into 3 phases: lex, parse, & evaluate. Lexing involves using [`reads`](https://hackage.haskell.org/package/base-4.16.1.0/docs/Prelude.html#v:reads), [`span`](https://hackage.haskell.org/package/base-4.16.1.0/docs/Prelude.html#v:span), & [character-checks](https://hackage.haskell.org/package/base-4.16.1.0/docs/Data-Char.html) to split the input text up into numbers, backtick-quoted strings, alphabetical identifiers, symbols, & equals-suffixed (comparison) symbols whilst ignoring whitespace. These all become strings again when parsed into an operator.
+
+Lexing strings is more involved since:
+
+* They are where lexing errors can occur.
+* Parametric SVG specifies a templating syntax by which the textualized result of some subformula can be spliced into the quoted text.
+
+To lex these string templates my implementation utilizes a co-recursive sublexer which treats this as syntactic sugar for a parenthesized `#` concatenation operator. The normal lexer handles `}` specially to hand the trailing text back to the template-string lexer.
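To make that concrete, here's a toy version of such a lexer (illustrative only; the token names & structure are my own, and template-string handling is omitted):

```haskell
import Data.Char (isSpace, isDigit, isAlpha, isAlphaNum)

-- Token kinds loosely mirroring the description above.
data Token = Num Double | Ident String | Sym String
  deriving (Show, Eq)

tokenize :: String -> [Token]
tokenize [] = []
tokenize s@(c:cs)
  | isSpace c = tokenize cs                         -- ignore whitespace
  | isDigit c = case reads s :: [(Double, String)] of
      [(n, rest)] -> Num n : tokenize rest          -- `reads` lexes numbers
      _           -> Sym [c] : tokenize cs
  | isAlpha c = let (word, rest) = span isAlphaNum s
                in Ident word : tokenize rest       -- `span` lexes identifiers
  | otherwise = case cs of
      ('=':rest) -> Sym [c, '='] : tokenize rest    -- equals-suffixed comparisons like <=
      _          -> Sym [c] : tokenize cs
```

A string-template sublexer, co-recursive with `tokenize`, would slot in as another guard handling the backtick & `}` cases.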
+
+## Parsing
+Parsing is done using a [Pratt](http://crockford.com/javascript/tdop/tdop.html)/[TDOP](https://eli.thegreenplace.net/2010/01/02/top-down-operator-precedence-parsing/) parser, which was a pleasure to write in Haskell, since [pattern-matching](https://hackingwithhaskell.com/basics/pattern-matching/) allowed me to more cleanly separate the layers-of-abstraction (parser vs lexer) than in a typical OO language, & Haskell's usual [parsing](https://two-wrongs.com/parser-combinators-parsing-for-haskell-beginners.html) weakness wasn't as much of a problem here.
+
+A Top-Down Operator Precedence (TDOP) parser is a parsing technique very well suited to operator-heavy languages like mathematics (hence why I desugared template-strings!). It is centred around a core [reentrant](https://en.wikipedia.org/wiki/Reentrancy_(computing)) loop which parses an initial token & all subsequent tokens with a greater "binding power". Helper functions/methods are used to parse the initial token(s) (`nud`), parse subsequent tokens (`led`), & determine the binding power for those tokens (`lbp`):
+
+    -- Parse the initial token with `nud`, then keep folding subsequent
+    -- tokens into `left` for as long as they bind tighter than `rbp`.
+    parse' rbp (tok:toks) = go $ nud tok toks
+      where
+        go (left, tok':toks')
+          | rbp < lbp tok' = go $ led left tok' toks'
+          | otherwise = (left, tok':toks')
+
+This is paired with a helper function to simplify recursive calls:
+
+ parseWith rbp toks cb = let (expr, toks') = parse' rbp toks in (cb expr, toks')
+
+Usually the `led` function, & to a lesser degree the `nud` function, wraps a recursive call back into that core loop whilst constructing an AST (Abstract Syntax Tree). Though there are exceptions for literal values, variable names, & postfix operators like `!` for computing [factorials](https://www.cuemath.com/numbers/factorial/). Also parentheses mess with the binding powers & expect to be closed, whether used as a prefix or (for function calls) infix operator.
+
+To close all in-progress loops, infinitely-many EOF tokens (which have the lowest binding power, alongside close-paren) are appended to the lexing results. Unknown operators also have zero precedence such that they'll stop the parser & allow for their syntax errors to be detected.
+
+## Evaluation
+That parser yields a [binary tree](https://www.programiz.com/dsa/binary-tree) to be [postorder-traversed](https://www.programiz.com/dsa/tree-traversal) (if the parser didn't emit any `Error`s) to yield a boolean, floating-point number, or string value; or (upon failed typechecks or unknown variables) null. A Haskell datatype is defined that can hold any of those. Numbers can be converted to strings, whilst booleans & nulls yield further nulls instead.
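That value datatype might be sketched like so (the constructor names here are my invention; the real definition surely differs):

```haskell
-- A single type holding every result a formula can evaluate to.
data Value = B Bool | N Double | S String | Null
  deriving (Show, Eq)

-- Coercion used when splicing a value into text: numbers textualize,
-- whilst booleans & nulls yield further nulls.
asString :: Value -> Value
asString v@(S _) = v
asString (N n)   = S (show n)
asString _       = Null
```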
+
+Usually it calls one of two helper functions to map each operator/function to the corresponding Haskell-equivalent. Though the ternary (`condition ? iftrue : iffalse`) operator is handled specially, even if `?` & `:` aren't by the parser. Same for the `,` (which I ensured was [right-associative](https://en.wikipedia.org/wiki/Operator_associativity)) & `(` operators function calls consist of; these get restructured into a function name & list of pre-evaluated argument values to simplify the helper function.
+
+The parser has minimal understanding of the different operators, mostly just enough to assign them a binding power.
+
+## Conclusion
+I don't know whether Parametric SVG is a feature I'll ever expose to webdevs, but I enjoyed implementing it. And it makes it easier to design programmatic art which'll both make Rhapsode's visuals more interesting & more accessible. I look forward to sharing these visuals once they are ready!
+
+Atypically Rhapsode's experience has been designed audio-first, with visuals only added due to the requirements of most app platforms!
A => _posts/Pango-name.svg.png +0 -0
A => blog.atom +19 -0
@@ 1,19 @@
+---
+---
+<?xml version="1.0" encoding="utf-8"?>
+<feed xmlns="http://www.w3.org/2005/Atom">
+ <title>Argonaut Constellation DevBlog</title>
+ <updated>{{ site.time|date_to_xml_schema }}</updated>
+ <author>
+ <name>Adrian Cochrane</name>
+ <email>adrian@openwork.nz</email>
+ </author>
+ <complete />
+
+ {% for post in site.posts %}<entry>
+ <title>{{ post.title|xml_escape }}</title>
+ <link href="{{ post.url|absolute_url }}" />
+ <updated>{{ post.date|date_to_xml_schema }}</updated>
+ <summary type="xhtml">{{ post.excerpt }}</summary>
+ </entry>{% endfor %}
+</feed>
A => blog.html +24 -0
@@ 1,24 @@
+---
+---
+<!DOCTYPE html>
+<html>
+<head>
+ <meta charset=utf-8 />
+ <meta name="viewport" content="width=device-width, initial-scale=1" />
+ <title>Argonaut Projects' Blog</title>
+ <link rel="stylesheet" href="https://cdn.simplecss.org/simple.css" />
+ <link rel="alternate" type="application/atom+xml" href="blog.atom" />
+</head>
+<body>
+ <h1><a href="/">Argonaut</a> Projects' Blog</h1>
+ <nav><a href="blog.atom" title="Subscribe"><img alt="Webfeed" src="/rss.svg" /></a>
+ <a href="https://feedrabbit.com/subscriptions/new?url=https://rhapsode.adrian.geek.nz/blog.atom" title="Subscribe via eMail & FeedRabbit"><img src="email.svg" alt="eMail" /></a></nav>
+
+ {% for post in site.posts %}
+ <h2><a href='{{ post.url | relative_url }}'>{{ post.title }}</a></h2>
+ <aside>By {{ post.author }} on <time>{{ post.date | date: "%b %-d, %Y" }}</time></aside>
+
+ {{ post.excerpt }}
+ {% endfor %}
+</body>
+</html>
A => espeak.png +0 -0
A => freetype.png +0 -0
A => haskell.png +0 -0
A => index.html +152 -0
@@ 1,152 @@
+<!DOCTYPE html>
+<html>
+<head>
+ <meta charset="utf-8" />
+ <title>Argonaut Constellation</title>
+ <style>
+ body {
+ text-align: center;
+ font-family: sans-serif;
+ }
+ main {
+ display: flex;
+ flex-wrap: wrap;
+ }
+ section {
+ flex: 1;
+ margin: 15px;
+ min-width: 40ch;
+ }
+ figure {
+ text-align: left;
+ margin: 10px;
+ }
+ figure img {float: left;}
+ </style>
+</head>
+<body>
+ <h1>The Argonaut Constellation</h1>
+ <p>The "Argonaut Constellation" refers to a range of software projects aiming
+ to illustrate the potential for a more private JavaScript-free web to work
+ better for anyone on any conceivable device.</p>
+ <main>
+ <section>
+ <h2>Argonaut Suite</h2>
+ <p>The "Argonaut Suite" refers to browsers, browser-engines, & the like
+ targetting different mediums.</p>
+
+ <figure>
+ <img src="rhapsode.png" alt="Purple & pink bat hugging the globe whilst
+ emitting a blue sonar signal." />
+ <h3><a href="https://www.rhapsode-web.org/">Rhapsode</a></h3>
+ <p>Auditory browser</p>
+ </figure>
+ <figure>
+ <h3><a href="https://www.rhapsode-web.org/amphiarao">Amphiarao</a></h3>
+ <p>Webpage debugger</p>
+ <p>Compatible with Selenium or any browser supporting webforms.</p>
+ </figure>
+ <figure>
+ <h3><a href="https://haphaestus.org/">Haphaestus</a></h3>
+ <p>Visual browser designed for use with TV remotes. (in-progress)</p>
+ </figure>
+ </section>
+ <section>
+ <h2>Argonaut Stack</h2>
+ <p>The "Argonaut Stack" refers to reusable
+ <a href="https://haskell.org/">Haskell</a> modules implementing
+ the different tasks required to render a webpage.</p>
+
+ <figure>
+ <h3><a href="https://hackage.haskell.org/package/hurl">HURL</a></h3>
+ <p>Haskell URL resolver</p>
+ </figure>
+ <figure>
+ <h3><a href="https://hackage.haskell.org/package/stylist">Haskell Stylist</a></h3>
+ <p>CSS styling engine</p>
+ </figure>
+ <figure>
+ <h3><a href="https://hackage.haskell.org/package/harfbuzz-pure">Harfbuzz-Pure</a></h3>
+      <p><a href="https://harfbuzz.github.io/">Harfbuzz</a> language-bindings</p>
+ </figure>
+ <p><em>More to come!</em></p>
+ </section>
+ <section>
+ <h2>Friends</h2>
+ <p>These are <em>some</em> 3rd party projects we make heavy use of! <br />
+ Thank you!</p>
+
+ <figure>
+ <img src="haskell.png"
+ alt="Purple '>>=' operator combined with a lambda character." />
+ <h3><a href="https://haskell.org/">Haskell</a></h3>
+ <p>Pure-functional programming language</p>
+ </figure>
+ <figure>
+ <h3><a href="https://hackage.haskell.org/package/text">Text</a></h3>
+ </figure>
+ <figure>
+ <h3><a href="https://hackage.haskell.org/package/bytestring">ByteString</a></h3>
+ </figure>
+ <hr />
+ <figure>
+ <h3><a href="https://hackage.haskell.org/package/network-uri">Network URI</a></h3>
+ </figure>
+ <figure>
+ <h3><a href="https://hackage.haskell.org/package/http-client-tls">
+ http-client-tls</a></h3>
+ </figure>
+ <figure>
+ <h3><a href="https://hackage.haskell.org/package/xml-conduit">XML Conduit</a></h3>
+ </figure>
+ <figure>
+ <h3><a href="https://hackage.haskell.org/package/css-syntax">Haskell CSS Syntax
+ </a></h3>
+ </figure>
+ <hr />
+ <figure>
+ <img src="espeak.png" alt="red-coloured lips forming an open mouth,
+ labeled beneath as 'eSpeak'." />
+ <h3><a href="https://github.com/espeak-ng/espeak-ng">eSpeak NG</a></h3>
+ <p>Text-to-speech</p>
+ </figure>
+ <figure>
+ <img src="voice2json.png" alt="Green curly brackets, pipes, & colon characters
+          sized to resemble a sinusoidal soundwave." />
+ <h3><a href="https://voice2json.org/">Voice2JSON</a></h3>
+ <p>Speech-to-text + intent parsing</p>
+ </figure>
+ <hr />
+ <figure>
+ <h3><a href="https://fontconfig.org/">FontConfig</a></h3>
+ <p>Selects system or app fontfiles</p>
+ </figure>
+ <figure>
+ <img src="HarfBuzz.png" alt="The name 'Harfbuzz' written in Persian." />
+ <h3><a href="https://harfbuzz.github.io/">Harfbuzz</a></h3>
+ <p>Selects "glyphs" from a font to represent the text</p>
+ <p>Crucial for internationalized text rendering</p>
+ </figure>
+ <figure>
+ <img src="freetype.png" alt="The words 'the FreeType Project' with a
+ blue calligraphic background." />
+ <h3><a href="https://freetype.org/">FreeType</a></h3>
+ <p>Parses, queries, & rasterizes fontfiles</p>
+ </figure>
+ <hr />
+ <figure>
+ <img src="JuicyPixels.png" alt="a pixelated cartoon orange with green leaf." />
+ <h3><a href="https://hackage.haskell.org/package/JuicyPixels">Juicy Pixels
+ </a></h3>
+ <p>Image loading/saving</p>
+ </figure>
+ <figure>
+ <h3><a href="https://hackage.haskell.org/package/rasterific-svg">
+ Rasterific SVG</a></h3>
+ <p>Renders SVG images</p>
+ </figure>
+ <p><em>And so much <strong>more</strong>!</em></p>
+ </section>
+ </main>
+</body>
+</html>
A => rhapsode.png +0 -0
A => upload.sh +5 -0
@@ 1,5 @@
+#! /bin/sh
+git add -p
+git commit
+jekyll build
+scp -r _site/* alcinnz@219.88.233.43:/var/www/argonaut-constellation.org
A => voice2json.png +0 -0