haku - writing a little programming language for fun

  • I’ve had this idea on my mind as of late, of a little pure functional programming language that would run in your browser.

    2024-07-24
    • the primary use case would be writing fun audiovisual sketches you can inspect and edit live, because after all everything is declarative. this was motivated by my discovery of glisp, which was recently on the front page of Lobsters.

      2024-07-24
      • I even commented about it!

        2024-07-24
  • so let’s get going!

    2024-07-24
  • parsing

    2024-07-24
    • I don’t know about you, but I like writing parsers. however, since I’m trying to keep this language absolutely tiny, I think S-expressions might be the best fit for this purpose.

      2024-07-24
      • honestly I don’t even like S-expressions that much. I find them extremely hard to read, but I dunno - maybe my mind will change after having written a language using them. we can always swap the syntax out for something else later.

        2024-07-24
    • let me show you an example of how I’d like haku to look. I find examples are the best way of breaking syntax down into smaller parts.

      ; Recursive fibonacci
      (def fib
          (fn (n)
              (if (< n 2)
                  n
                  (+ (fib (- n 1)) (fib (- n 2))))))
      
      (print (fib 10))
      
      2024-07-24
      • we have a handful of lexical elements: parentheses, identifiers, and numbers.

        there are also comments and whitespace, of course. those will get skipped over by the lexer, since we’re not building a production-grade language that would need to preserve them.

        2024-07-24
      • syntactically, we only really have two types of productions. there are literals, and there are lists.

        2024-07-24
        • when I say literals, I’m referring to both identifiers and integers. we will of course differentiate between them in the syntax, because they mean different things.

          2024-07-24
    • we will start by writing the lexical analysis part of our parser, to join single characters up into slightly more manageable pieces.

      export const lexer = {};
      2024-07-24
    • the entire idea of a lexer is that you read the input string left to right, top to bottom, and piece together larger tokens out of that.

      2024-07-24
      • for instance, for the input string

        (example s-expression)
        

        we will produce the tokens

        type         start  end  text
        (            0      1    (
        identifier   1      8    example
        identifier   9      21   s-expression
        )            21     22   )
        end of file  22     22
        2024-07-24
    • to lex the input into tokens, we’ll need to know the input string (of course), and where we currently are in the string.

      lexer.init = (input) => {
          return {
              input,
              position: 0,
          };
      };
      2024-07-24
    • we’ll also define a few helper functions to make reading text a little easier, without having to perform any bounds checks whenever we read tokens.

      export const eof = "end of file";

      lexer.current = (state) => {
          return state.position < state.input.length
              ? state.input.charAt(state.position)
              : eof;
      };

      lexer.advance = (state) => ++state.position;
      2024-07-24
    • our lexer will run in a loop, producing tokens until it hits the end of input or an error.

      export function lex(input) {
          let tokens = [];
          let state = lexer.init(input);
          while (true) {
              let start = state.position;
              let kind = lexer.nextToken(state);
              let end = state.position;
              tokens.push({ kind, start, end });
              if (kind == eof || kind == "error") break;
          }
          return tokens;
      }
      2024-07-24
      • remember that error handling is important! we mustn’t forget that the user can produce invalid input - such as this string:

        {example}
        

        haku does not have curly braces in its syntax, so that’s clearly an error! reporting this to the user will be a much better experience than, perhaps… getting stuck in an infinite loop. oh

        2024-07-24
    • now for the most important part - that lexer.nextToken we used will be responsible for reading back a token from the input, and returning what kind of token it has read.

      for now, let’s make it detect parentheses. we of course also need to handle the end of input - whenever our lexer runs out of characters to consume - as well as any characters we don’t expect.

      lexer.nextToken = (state) => {
          let c = lexer.current(state);
          if (c == "(" || c == ")") {
              lexer.advance(state);
              return c;
          }
          if (c == eof) return eof;
          lexer.advance(state);
          return "error";
      };
      2024-07-24
    • with all that frameworking in place, let’s test if our lexer works!

      export function printTokens(input) {
          let tokens = lex(input);
          for (let { kind, start, end } of tokens) {
              if (kind == "error") {
                  let errorString = input.substring(start, end);
                  console.log(`unexpected characters at ${start}..${end}: '${errorString}'`);
              } else {
                  console.log(`${kind} @ ${start}..${end}`);
              }
          }
      }

      printTokens(`()((()))`);
      ( @ 0..1
      ) @ 1..2
      ( @ 2..3
      ( @ 3..4
      ( @ 4..5
      ) @ 5..6
      ) @ 6..7
      ) @ 7..8
      end of file @ 8..8
      

      …seems pretty perfect!

      2024-07-24
    • except, of course, we’re not handling whitespace or comments.

      printTokens(`( )`);
      ( @ 0..1
      unexpected characters at 1..2: ' '
      
      2024-07-24
    • so let’s write another function that will lex those.

      lexer.skipWhitespaceAndComments = (state) => {
          while (true) {
              let c = lexer.current(state);
              if (c == " " || c == "\t" || c == "\n" || c == "\r") {
                  lexer.advance(state);
                  continue;
              }
              if (c == ";") {
                  while (
                      lexer.current(state) != "\n" &&
                      lexer.current(state) != eof
                  ) {
                      lexer.advance(state);
                  }
                  lexer.advance(state); // skip over the newline, too
                  continue;
              }
              break;
          }
      };
      2024-07-24
    • except instead of looking at whitespace and comments in the main token reading function, we’ll do that outside of it, to avoid getting whitespace caught up in the actual tokens’ start..end spans.

      export function lex(input) {
          let tokens = [];
          let state = lexer.init(input);
          while (true) {
              lexer.skipWhitespaceAndComments(state); // <--
              let start = state.position;
              let kind = lexer.nextToken(state);
              let end = state.position;
              tokens.push({ kind, start, end });
              if (kind == eof || kind == "error") break;
          }
          return tokens;
      }
      2024-07-24
    • now if we look at the output…

      printTokens(`( )`);
      ( @ 0..1
      ) @ 2..3
      end of file @ 3..3
      

      the whitespace is ignored just fine!

      2024-07-24
      • and comments of course follow:

        printTokens(`
    ( ; comment comment!
    )
`);
        ( @ 5..6
        ) @ 30..31
        end of file @ 32..32
        
        2024-07-24
    • it’d be really nice if we could use identifiers though…

      printTokens(`(hello world)`);
      ( @ 0..1
      unexpected characters at 1..2: 'h'
      

      so I guess that’s the next thing on our TODO list!

      2024-07-24
    • we’ll introduce a function that will tell us if a given character is a valid character in an identifier.

      since S-expressions are so minimal, it is typical to allow all sorts of characters in identifiers - in our case, we’ll allow alphanumerics, as well as a bunch of symbols that seem useful. and funky!

      export const isIdentifier = (c) => /^[a-zA-Z0-9+~!@$%^&*=<>+?/.,:\\|-]$/.test(c);
      2024-07-24
      • this could probably be a whole lot faster if I had used a simple c >= 'a' && c <= 'z' chain, but I’m lazy, so a regex it is.

        2024-07-24
      • when I said funky, I wasn’t joking - have you ever seen , in an identifier?

        2024-07-24
        • I’m allowing it since it isn’t really gonna hurt anything. I did disallow # though, because that’s commonly used for various extensions. who knows what I might be able to cram under that symbol!

          2024-07-24
    • with a character set established, we can now stuff identifiers into our lexer. I’ll start by introducing a function that’ll chew through as many characters meeting a given condition as it can:

      lexer.advanceWhile = (state, fn) => {
          while (fn(lexer.current(state))) {
              lexer.advance(state);
          }
      };
      2024-07-24
    • now we can add identifiers to nextToken:

      lexer.nextToken = (state) => {
          let c = lexer.current(state);
          if (isIdentifier(c)) {
              lexer.advanceWhile(state, isIdentifier);
              return "identifier";
          }
          if (c == "(" || c == ")") {
              lexer.advance(state);
              return c;
          }
          if (c == eof) return eof;
          lexer.advance(state);
          return "error";
      };
      2024-07-24
    • let’s try lexing that (hello world) string now.

      printTokens(`(hello world)`);
      ( @ 0..1
      identifier @ 1..6
      identifier @ 7..12
      ) @ 12..13
      end of file @ 13..13
      

      nice!

      2024-07-24
    • in the original example, there were also a couple of numbers:

      (+ (fib (- n 1)) (fib (- n 2)))
      

      so let’s also add support for some basic integers; we’ll add decimals later if we ever need them.

      2024-07-24
    • defining integers is going to be a similar errand to identifiers, so I’ll spare you the details and just dump all the code at you:

      export const isDigit = (c) => c >= "0" && c <= "9";

      lexer.nextToken = (state) => {
          let c = lexer.current(state);
          if (isDigit(c)) {
              lexer.advanceWhile(state, isDigit);
              return "integer";
          }
          if (isIdentifier(c)) {
              lexer.advanceWhile(state, isIdentifier);
              return "identifier";
          }
          if (c == "(" || c == ")") {
              lexer.advance(state);
              return c;
          }
          if (c == eof) return eof;
          lexer.advance(state);
          return "error";
      };
      2024-07-24
      • note how we check isDigit before isIdentifier - this is really important, because otherwise identifiers would take precedence over integers!
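
        to see this concretely, here’s a standalone check using a copy of our identifier regex (renamed here so it stands alone):

        ```javascript
        // a standalone copy of the identifier regex, just for this check
        const matchesIdentifierSet = (c) => /^[a-zA-Z0-9+~!@$%^&*=<>+?/.,:\\|-]$/.test(c);
        const matchesDigit = (c) => c >= "0" && c <= "9";

        // "5" passes both checks - so the digit check has to win first,
        // otherwise every integer would get lexed as an identifier.
        console.log(matchesIdentifierSet("5")); // true
        console.log(matchesDigit("5")); // true
        ```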

        2024-07-24
    • now let’s see the results of all that hard work.

      printTokens(`(fib (- n 1))`);
      ( @ 0..1
      identifier @ 1..4
      ( @ 5..6
      identifier @ 6..7
      identifier @ 8..9
      integer @ 10..11
      ) @ 11..12
      ) @ 12..13
      end of file @ 13..13
      

      looks good!

      2024-07-24
    • an amen break

      2024-07-24
      • to let your head rest a bit after reading all of this, here are some fun numbers:

        2024-07-24
        • there are a total of

          console.log(Object.keys(lexer).length);
          6
          

          functions in the lexer namespace.

          not a whole lot, huh?

          2024-07-24
        • I was personally quite surprised how tiny an S-expression lexer can be. they were right about S-expressions being a good alternative for when you don’t want to write syntax!

          the entire thing fits in 86 lines of code.

          2024-07-24
      • :bulb: for the curious: here’s why I implement lexers like this!

        2024-07-24
        • many tutorials will have you implementing lexers such that data is parsed into the language’s data types. for instance, integer tokens would be parsed into JavaScript numbers.

          I don’t like this approach for a couple reasons.

          2024-07-24
          • pre-parsing data like this pollutes your lexer code with wrangling tokens into useful data types. I prefer it if the lexer is only responsible for reading back strings.

            implemented my way, it can concern itself only with chewing through the source string; no need to extract substrings out of the input or anything.

            2024-07-24
          • there’s also a performance boost from implementing it this way: lazy parsing, as I like to call it, allows us to defer most of the parsing work until it’s actually needed. if the token never ends up being needed (e.g. due to a syntax error), we don’t end up doing extra work eagerly!
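
            as a tiny illustration of what I mean (with a hand-written token, not one from the lexer above): the token only records where the integer lives, and the JavaScript number only gets materialized at the point of use.

            ```javascript
            let source = "(foo 42)";

            // the lexer only records the span of the integer...
            let integerToken = { kind: "integer", start: 5, end: 7 };

            // ...and the number is parsed out lazily, when something asks for it:
            let integerValue = parseInt(source.substring(integerToken.start, integerToken.end));
            console.log(integerValue); // 42
            ```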

            2024-07-24
            • if that doesn’t convince you, consider that now all your tokens are the exact same data structure, and you can pack them neatly into a flat array.

              if you’re using a programming language with flat arrays, that is. such as Rust or C.

              I’m implementing this in JavaScript of course, but it’s still neat not having to deal with a mass of ifs when extracting data from tokens - you’re always guaranteed a token will have a kind, start, and end.

              2024-07-24
      • now. back to your regularly scheduled programming!

        2024-07-24
    • it’s time for us to implement a parser for our S-expressions.

      export const parser = {};
      2024-07-24
    • the goal is to go from this flat list of tokens:

      type         start  end  text
      (            0      1    (
      identifier   1      8    example
      identifier   9      21   s-expression
      )            21     22   )
      end of file  22     22

      to a nice recursive tree that represents our S-expressions:

      list
          identifier example
          identifier s-expression
      
      2024-07-24
    • there are many parsing strategies we could go with, but in my experience you can’t go simpler than good ol’ recursive descent.

      2024-07-24
      • the idea of recursive descent is that you have a stream of tokens that you read from left to right, and you have a set of functions that parse your non-terminals.

        essentially, each function corresponds to a single type of node in your syntax tree.

        2024-07-24
        • does the “stream of tokens that you read from left to right” ring a bell? if it does, that’s because lexing operates on a very similar process - it’s just non-recursive!

          2024-07-24
    • knowing that similarity, we’ll start off with a similar set of helper functions to our lexer.

      parser.init = (tokens) => {
          return {
              tokens,
              position: 0,
          };
      };

      parser.current = (state) => state.tokens[state.position];

      parser.advance = (state) => {
          if (state.position < state.tokens.length - 1) {
              ++state.position;
          }
      };

      note however that instead of letting current read out of bounds, we clamp advance at the very last token - which is guaranteed to be end of file.

      2024-07-24
    • the S-expression grammar can compose in the following ways:

      2024-07-24
      • an S-expression is a literal integer, identifier, or a list.

        2024-07-24
      • literal integers 65 and identifiers owo stand alone on their own. they do not nest anything else inside of them.

        2024-07-24
      • lists (a b c) are sequences of S-expressions enclosed in parentheses. inside, they can contain literal integers and identifiers, or even other lists recursively.

        2024-07-24
    • this yields the following EBNF grammar:

      Expr = "integer" | "identifier" | List;
      List = "(" , { Expr } , ")";
      
      2024-07-24
    • we’ll start by implementing the Expr = "integer" | "identifier" rule. parsing integers and identifiers is as simple as reading their single token, and returning a node for it:

      parser.parseExpr = (state) => {
          let token = parser.current(state);
          switch (token.kind) {
              case "integer":
              case "identifier":
                  parser.advance(state);
                  return { ...token };
              default:
                  parser.advance(state);
                  return {
                      kind: "error",
                      error: "unexpected token",
                      start: token.start,
                      end: token.end,
                  };
          }
      };
      2024-07-24
      • of course again, we mustn’t forget about errors! it’s totally possible for our lexer to produce a token we don’t understand - such as an error, or an end of file. or really any token we might introduce in the future but choose not to make valid as an Expr starter.

        2024-07-24
    • we’ll wrap initialization and parseExpr in another function, which will accept a list of tokens and return a syntax tree, hiding the complexity of managing the parser state underneath.

      parser.parseRoot = (state) => parser.parseExpr(state);

      export function parse(input) {
          let state = parser.init(input);
          let expr = parser.parseRoot(state);
          if (parser.current(state).kind != eof) {
              let strayToken = parser.current(state);
              return {
                  kind: "error",
                  error: `found stray '${strayToken.kind}' token after expression`,
                  start: strayToken.start,
                  end: strayToken.end,
              };
          }
          return expr;
      }

      this function also checks that there aren’t any tokens after we’re done parsing the root Expr production. it wouldn’t be very nice UX if we let the user input tokens that didn’t do anything!

      2024-07-24
      • I’m adding that parseRoot alias in so that it’s easy to swap the root production to something else than Expr.

        2024-07-24
    • now we can try to parse a tree out of a little expression…

      export function printTree(input) {
          let tokens = lex(input);
          let tree = parse(tokens);
          console.log(JSON.stringify(tree, null, " "));
      }

      …and print it into the console:

      printTree("-w-")
      {
          "kind": "identifier",
          "start": 0,
          "end": 3
      }
      

      nice!

      2024-07-24
    • now it’s time to parse some lists. for that, we’ll introduce another function, which will be called by parseExpr with an existing ( token.

      its task will be to read as many expressions as it can, until it hits a closing parenthesis ), and then construct a node out of that.

      parser.parseList = (state, leftParen) => {
          parser.advance(state);
          let children = [];
          while (parser.current(state).kind != ")") {
              if (parser.current(state).kind == eof) {
                  return {
                      kind: "error",
                      error: "missing closing parenthesis ')'",
                      start: leftParen.start,
                      end: leftParen.end,
                  };
              }
              children.push(parser.parseExpr(state));
          }
          let rightParen = parser.current(state);
          parser.advance(state);
          return {
              kind: "list",
              children,
              start: leftParen.start,
              end: rightParen.end,
          };
      };
      2024-07-24
    • and the last thing left to do is to hook it up to our parseExpr, in response to a ( token:

      parser.parseExpr = (state) => {
          let token = parser.current(state);
          switch (token.kind) {
              case "integer":
              case "identifier":
                  parser.advance(state);
                  return { ...token };
              case "(":
                  return parser.parseList(state, token); // <--
              default:
                  parser.advance(state);
                  return {
                      kind: "error",
                      error: "unexpected token",
                      start: token.start,
                      end: token.end,
                  };
          }
      };
      2024-07-24
    • now let’s try parsing an S-expression!

      printTree("(hello! ^^ (nested nest))");
      {
          "kind": "list",
          "children": [
              {
                  "kind": "identifier",
                  "start": 1,
                  "end": 7
              },
              {
                  "kind": "identifier",
                  "start": 8,
                  "end": 10
              },
              {
                  "kind": "list",
                  "children": [
                      {
                          "kind": "identifier",
                          "start": 12,
                          "end": 18
                      },
                      {
                          "kind": "identifier",
                          "start": 19,
                          "end": 23
                      }
                  ],
                  "start": 11,
                  "end": 24
              }
          ],
          "start": 0,
          "end": 25
      }
      
      2024-07-24
    • I don’t know about you, but I personally find the JSON output quite distracting and long. I can’t imagine how long it’ll be on even larger expressions!

      to counteract that, let’s write an S-expression pretty printer:

      export function exprToString(expr, input) {
          let inputSubstring = input.substring(expr.start, expr.end);
          switch (expr.kind) {
              case "integer":
              case "identifier":
                  return inputSubstring;
              case "list":
                  return `(${expr.children
                      .map((expr) => exprToString(expr, input))
                      .join(" ")})`;
              case "error":
                  return `<error ${expr.start}..${expr.end} '${inputSubstring}': ${expr.error}>`;
          }
      }
      2024-07-24
      • obviously this loses some information compared to the JSON - we no longer report start and end indices, but that is easy enough to add if you need it. I don’t need it, so I’ll conveniently skip it for now.

        2024-07-24
    • let’s see if our pretty printer works!

      export function printTree(input) {
          let tokens = lex(input);
          let tree = parse(tokens);
          console.log(exprToString(tree, input));
      }

      printTree("(hello! -w- (nestedy nest))");
      (hello! -w- (nestedy nest))
      

      that’s… the same string.

      2024-07-24
    • let’s try something more complicated, with comments and such.

      export function printTree(input) {
          let tokens = lex(input);
          let tree = parse(tokens);
          console.log(exprToString(tree, input));
      }

      printTree(`
    (def add-two ; Add two to a number.
        (fn (n)
            (+ n 2)))
`);
      (def add-two (fn (n) (+ n 2)))
      

      looks like it works!

      2024-07-24
      • of course this is hardly the prettiest printer in the world.

        2024-07-24
        • for one, it does not even preserve your comments.

          2024-07-24
        • it does not add indentation either, it just blindly dumps a minimal S-expression into the console.

          2024-07-24
        • but it proves that our parser works - we’re able to parse an arbitrary S-expression into a syntax tree, and then traverse that syntax tree again, performing various recursive algorithms on it. isn’t that cool?

          2024-07-24
    • and that’s all there’ll be to parsing, at least for now!

      2024-07-24
      • maybe in the future I’ll come up with something more complex, with a more human-friendly syntax. who knows! right now it’s experimentation time, so these things don’t really matter.

        2024-07-24
    • amen break, part two

      2024-07-24
      • the S-expression parser consists of a whopping

        console.log(Object.keys(parser).length);
        6
        

        functions. just like the lexer!

        2024-07-24
      • the parser is 99 lines of code. quite tiny, if you ask me!

        2024-07-24
        • together with the lexer, the entire S-expression parser is 185 lines of JavaScript. that’s a pretty small amount, especially given that it’s extremely simple code!

          2024-07-24
        • I wouldn’t call this parser production-ready, though. a production-ready parser would have some way of preserving comments inside the syntax tree, such that you can pretty-print it losslessly.

          if you’re bored, you can try to add that in!

          2024-07-24
      • here’s a fun piece of trivia: I wrote a Nim S-expression parser for Rosetta Code way back in July 2019.

        2024-07-24
        • you can see it’s quite different from how I wrote this parser - in particular, because I didn’t need to focus so much on the parser being hot-patchable and reusable, it came out quite a lot more compact, despite having fully static types!

          2024-07-24
        • it’s definitely not how I would write a parser nowadays. the overall approach is pretty similar, but the syntax tree structures are quite different - it doesn’t use the lazy parsing trick I talked about before.

          2024-07-24
          • I mean, it’s only a trick I learned last year!

            2024-07-24
        • code style-wise it’s also not my prettiest Nim code ever - it kind of abuses templates for referring to the current character with a single word, but that doesn’t convey the fact that it’s an effectful operation very well.

          2024-07-24
  • interpretation

    2024-07-25
    • with a parser now ready, it would be nice if we could execute some actual code!

      2024-07-25
    • we’ll again start off by setting a goal. I want to be able to evaluate arbitrary arithmetic expressions, like this one:

      (+ (* 2 1) 1 (/ 6 2) (- 10 3))
      
      2024-07-25
    • the simplest way to get some code up and running would be to write a tree-walk interpreter.

      export const treewalk = {};

      this kind of interpreter is actually really simple! it just involves walking through your syntax tree, executing each node one by one.

      2024-07-25
    • we’ll again start off by defining a function that initializes our interpreter’s state.

      right now there isn’t really anything to initialize, but recall that we don’t have our tokens parsed into any meaningful data yet, so we’ll need access to the source string to do that.

      treewalk.init = (input) => {
          return { input };
      };
      2024-07-25
    • the core of our interpretation will be a function that descends down the node tree and evaluates each node, giving us a result.

      treewalk.eval = (state, node) => {
          switch (node.kind) {
              default:
                  throw new Error(`unhandled node kind: ${node.kind}`);
          }
      };

      for now we’ll leave it empty.

      2024-07-25
    • in the meantime, let’s prepare a couple convenient little wrappers to run our code:

      export function run(input, node) {
          let state = treewalk.init(input);
          return treewalk.eval(state, node);
      }

      export function printEvalResult(input) {
          try {
              let tokens = lex(input);
              let ast = parse(tokens);
              let result = run(input, ast);
              console.log(result);
          } catch (error) {
              console.log(error.toString());
          }
      }
      2024-07-25
    • now we can try running some code! let’s see what happens.

      printEvalResult("65");
      Error: unhandled node kind: integer
      

      …of course.

      2024-07-25
    • so let’s patch those integers in!

      this is where we’ll need that source string of ours - we don’t actually have a JavaScript number representation of the integers, so we’ll need to parse them into place.

      treewalk.eval = (state, node) => {
          switch (node.kind) {
              case "integer":
                  let sourceString = state.input.substring(node.start, node.end);
                  return parseInt(sourceString);
              default:
                  throw new Error(`unhandled node kind: ${node.kind}`);
          }
      };
      2024-07-25
    • now when we run the program above…

      printEvalResult("65");
      65
      

      we get sixty five!

      2024-07-25
    • but that’s of course a bit boring - it would be nice if we could like, y’know, perform some arithmetic.

      2024-07-25
    • traditionally, in Lisp-like languages, a list expression always represents a function application, with the head of the list being the function to call, and the tail of the list being the arguments to apply to the function.

      let’s implement that logic then!

      export const builtins = {};

      treewalk.eval = (state, node) => {
          switch (node.kind) {
              case "integer":
                  let sourceString = state.input.substring(node.start, node.end);
                  return parseInt(sourceString);
              case "list": // <--
                  let functionToCall = node.children[0];
                  let builtin =
                      builtins[state.input.substring(functionToCall.start, functionToCall.end)];
                  return builtin(state, node);
              default:
                  throw new Error(`unhandled node kind: ${node.kind}`);
          }
      };
      2024-07-25
      • I’m putting all of our built-in magic functions into a separate object builtins, so that they’re easy to patch partially later. you’ve seen my tricks already with hot-patching functions in objects, so this shouldn’t be too surprising.

        2024-07-25
      • you’ll note I’m kind of cheating here - because we have no mechanism to represent variables just yet, I’m using the node’s text as the key to our builtins table.

        2024-07-25
        • heck, I’m not even validating that this is an identifier - so you can technically do something like this, too:

          ((what the fuck) lol)
          

          which will call the builtin named (what the fuck).

          2024-07-25
    • we could try this out now, except we don’t actually have any builtins! so I’ll add a few in, so that we can finally perform our glorious arithmetic:

      function arithmeticBuiltin(op) {
          return (state, node) => {
              if (node.children.length < 3)
                  throw new Error("arithmetic operations require at least two arguments");
              let result = treewalk.eval(state, node.children[1]);
              for (let i = 2; i < node.children.length; ++i) {
                  result = op(result, treewalk.eval(state, node.children[i]));
              }
              return result;
          };
      }

      builtins["+"] = arithmeticBuiltin((a, b) => a + b);
      builtins["-"] = arithmeticBuiltin((a, b) => a - b);
      builtins["*"] = arithmeticBuiltin((a, b) => a * b);
      builtins["/"] = arithmeticBuiltin((a, b) => a / b);
      2024-07-25
      • one thing of note is how arithmeticBuiltin requires at least two arguments. you’re free to pass in more than that, which is common among Lisps.
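
        for the curious, the left-to-right fold over the arguments behaves like this little JavaScript sketch (a standalone mock, not the actual builtin):

        ```javascript
        // mimics arithmeticBuiltin's left fold over the argument list
        const sub = (...args) => args.reduce((a, b) => a - b);

        console.log(sub(10, 1, 2)); // 7, same as (- 10 1 2) in haku
        ```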

        2024-07-25
    • now let’s try running our full arithmetic expression! drum roll please…

      printEvalResult("(+ (* 2 1) 1 (/ 6 2) (- 10 3))");
      13
      
      2024-07-25
    • a brief intermission

      2024-07-25
      • I will now pause here to say, I’m kind of tired of writing this printEvalResult ceremony over and over again. so I took a bit of time to enhance the treehouse’s capabilities, and it’s now capable of running languages other than JavaScript!

        2024-07-25
      • all we have to do is swap out the evaluation kernel…

        import { getKernel, defaultEvalModule } from "treehouse/components/literate-programming/eval.js";

        export const kernel = getKernel();

        kernel.evalModule = async (state, source, language, params) => {
            if (language == "haku") {
                printEvalResult(source);
                return true;
            } else {
                return await defaultEvalModule(state, source, language, params);
            }
        };
        2024-07-25
      • and now we can write haku in code blocks!

        (+ (* 2 1) 1 (/ 6 2) (- 10 3))
        13
        
        2024-07-25
    • anyways, it’s time to turn haku into a real programming language!

      2024-07-26
      • programming languages as we use them in the real world are Turing-complete - roughly speaking, a language is Turing-complete if it can simulate a Turing machine.

        2024-07-26
        • this is not an accurate definition at all - for that, I strongly suggest reading the appropriate Wikipedia articles.

          2024-07-26
        • the TL;DR is that conditional loops are all you really need for Turing-completeness.

          2024-07-26
      • there are two main models of Turing-complete abstract machines: Turing machines, and lambda calculus.

        2024-07-26
        • Turing machines are the core of imperative programming languages - a Turing machine basically just models a state machine. similar to what you may find in a modern processor.

          2024-07-26
        • lambda calculus on the other hand is a declarative system, a stripped-down version of math if you will. an expression in lambda calculus computes a result, and that’s it. no states, no side effects. just like functional programming.

          2024-07-26
          • which is why we’ll use it for haku!

            2024-07-26
      • at the core of lambda calculus is the lambda - yes, that one from your favorite programming language! there are a few operations we can do on lambdas.

        2024-07-26
        • first of all, a lambda is a function which takes one argument, and produces one result - both of which can be other lambdas.

          in haku, we will write down lambdas like so:

          (fn (a) r)
          

          where a is the name of the argument, and r is the resulting expression.

          2024-07-26
          • in fact, haku will extend this idea by permitting multiple arguments.

            (fn (a b c) r)
            
            2024-07-26
        • a lambda can be applied, which basically corresponds to a function call in your favorite programming language.

          we write application down like so:

          (f x)
          

          where f is any expression producing a lambda, and x is the argument to pass to that lambda.

          2024-07-26
        • what’s also important is that nested lambdas capture their outer lambdas’ arguments! so the result of this:

          (((fn (x) (fn (y) (+ x y))) 1) 2)
          

          is 3.
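
          if it helps, the same shape written with JavaScript arrow functions is:

          ```javascript
          // the inner arrow captures x from the outer one
          const sum = ((x) => (y) => x + y)(1)(2);
          console.log(sum); // 3
          ```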

          2024-07-26
      • this is by no means a formal explanation, just my intuition as to how it works. formal definitions don’t really matter for us anyways, since we’re just curious little cats playing around with computers :cowboy:

        2024-07-26
    • we’ll start out with a way to define variables. variables generally have scope - look at the following JavaScript, for example:

      let x = 0;
      console.log(x);
      {
          let x = 1;
          console.log(x);
      }
      console.log(x);
      0
      1
      0
      
      2024-07-26
    • the same thing happens in haku (though we don’t have a runtime for this yet, so you’ll have to take my word for it.)

      ((fn (x)
          ((fn (x)
              x)
           2))
       1)
      

      this is perfectly fine, and the result should be 2 - not 1!

      try evaluating this in your head, and you’ll see what I mean. it’s better than me telling you all about it.

      2024-07-26
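    • if you’d like to check your head-evaluation, here’s the same shadowing situation sketched in JavaScript, with immediately-invoked arrow functions standing in for the haku lambdas:

      ```javascript
      // JS equivalent of the haku expression above: the inner x shadows the outer one
      const result = ((x) => ((x) => x)(2))(1);
      console.log(result); // 2
      ```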
    • so to represent scope, we’ll introduce a new variable to our interpreter’s state.

      treewalk.init = (input) => {
          return {
              input,
              scopes: [new Map(Object.entries(builtins))],
          };
      };

      scopes will be a stack of Maps, each representing a single scope.

      our builtins will now live at the bottom of all scopes, as an ever-present scope living in the background.

      2024-07-26
    • variable lookup will be performed by walking the scope stack from top to bottom, until we find a variable that matches.

      treewalk.lookupVariable = (state, name) => {
          for (let i = state.scopes.length; i-- > 0; ) {
              let scope = state.scopes[i];
              if (scope.has(name)) {
                  return scope.get(name);
              }
          }
          throw new Error(`variable ${name} is undefined`);
      };

      we’re stricter than JavaScript here and will error out on any variables that are not defined.

      2024-07-26
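    • to see the lookup order in isolation, here’s a standalone sketch of the same walk, with the surrounding interpreter state stripped away (the names and values here are purely illustrative):

      ```javascript
      // walk the scope stack from top to bottom, returning the first match
      function lookupVariable(scopes, name) {
          for (let i = scopes.length; i-- > 0; ) {
              if (scopes[i].has(name)) {
                  return scopes[i].get(name);
              }
          }
          throw new Error(`variable ${name} is undefined`);
      }

      let scopes = [new Map([["x", "outer"]]), new Map([["x", "inner"]])];
      console.log(lookupVariable(scopes, "x")); // inner - the topmost scope wins
      ```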
    • now we can go ahead and add variable lookups to our eval function! we’ll also go ahead and replace our bodged-in builtin support with proper evaluation of the first list element.

      in most cases, such as (+ 1 2), this will result in a variable lookup.

      treewalk.eval = (state, node) => {
          switch (node.kind) {
              case "integer":
                  let sourceString = state.input.substring(node.start, node.end);
                  return parseInt(sourceString);
              case "identifier":
                  return treewalk.lookupVariable(state, state.input.substring(node.start, node.end));
              case "list":
                  let functionToCall = treewalk.eval(state, node.children[0]);
                  return functionToCall(state, node);
              default:
                  throw new Error(`unhandled node kind: ${node.kind}`);
          }
      };
      2024-07-26
    • if we didn’t screw anything up, we should still be getting 13 here:

      (+ (* 2 1) 1 (/ 6 2) (- 10 3))
      13
      

      looks like all’s working correctly!

      2024-07-26
    • time to build our fn builtin.

      we’ll split the work into two functions: the actual builtin, which will parse the node’s structure into some useful variables…

      builtins.fn = (state, node) => {
          if (node.children.length != 3)
              throw new Error("an `fn` must have an argument list and a result expression");

          let params = node.children[1];
          if (node.children[1].kind != "list")
              throw new Error("expected parameter list as second argument to `fn`");

          let paramNames = [];
          for (let param of params.children) {
              if (param.kind != "identifier") {
                  throw new Error("`fn` parameters must be identifiers");
              }
              paramNames.push(state.input.substring(param.start, param.end));
          }

          let expr = node.children[2];

          return makeFunction(state, paramNames, expr);
      };
      2024-07-26
    • and makeFunction, which will take that data, and assemble it into a function that follows our (state, node) => result calling convention.

      export function makeFunction(state, paramNames, bodyExpr) {
          return (state, node) => {
              if (node.children.length != paramNames.length + 1)
                  throw new Error(
                      `incorrect number of arguments: expected ${paramNames.length}, but got ${node.children.length - 1}`,
                  );

              let scope = new Map();
              for (let i = 0; i < paramNames.length; ++i) {
                  scope.set(paramNames[i], treewalk.eval(state, node.children[i + 1]));
              }

              state.scopes.push(scope);
              let result = treewalk.eval(state, bodyExpr);
              state.scopes.pop();
              return result;
          };
      }
      2024-07-26
    • now let’s try using that new fn builtin!

      ((fn (a b) (+ a b)) 1 2)
      3
      

      nice!

      2024-07-26
    • but, remember that lambdas are supposed to capture their outer variables! I wonder if that works.

      ((fn (f) ((f 1) 2)) (fn (x) (fn (y) (+ x y))))
      Error: variable x is undefined
      

      …I was being sarcastic here of course, of course it doesn’t work. ralsei_dead

      2024-07-26
    • so to add support for that, we’ll clone the entire scope stack into the closure, and then restore it when necessary.

      export function makeFunction(state, paramNames, bodyExpr) {
          let capturedScopes = [];
          // Start from 1 to skip builtins, which are always present anyways.
          for (let i = 1; i < state.scopes.length; ++i) {
              // We don't really mutate the scopes after pushing them onto the stack, so keeping
              // references to them is okay.
              capturedScopes.push(state.scopes[i]);
          }

          return (state, node) => {
              if (node.children.length != paramNames.length + 1)
                  throw new Error(
                      `incorrect number of arguments: expected ${paramNames.length}, but got ${node.children.length - 1}`,
                  );

              let scope = new Map();
              for (let i = 0; i < paramNames.length; ++i) {
                  scope.set(paramNames[i], treewalk.eval(state, node.children[i + 1]));
              }

              state.scopes.push(...capturedScopes); // <--
              state.scopes.push(scope);
              let result = treewalk.eval(state, bodyExpr);
              // Pop the parameter scope *and* the captured scopes we pushed above.
              state.scopes.length -= capturedScopes.length + 1;
              return result;
          };
      }

      with that, our program now works correctly:

      ((fn (f) ((f 1) 2)) (fn (x) (fn (y) (+ x y))))
      3
      
      2024-07-26
    • being able to define arbitrary functions gives us some pretty neat powers! to test this out, let’s write a little program that will calculate Fibonacci numbers.

      2024-07-30
      • there are a couple of ways to write a function that calculates numbers in the Fibonacci sequence.

        2024-07-30
      • the most basic is the recursive way, which is really quite simple to do:

        function fib(n) {
            if (n < 2) {
                return n;
            } else {
                return fib(n - 1) + fib(n - 2);
            }
        }

        console.log(fib(10));
        55
        

        the downside is that it’s really inefficient! we end up wasting a lot of time doing repeat calculations. try going through it yourself and see just how many calculations are repeated!

        2024-07-30
      • the one that’s more efficient is the iterative version:

        function fib(n) {
            let a = 0;
            let b = 1;
            let t = null;
            for (let i = 0; i < n; ++i) {
                t = a;
                a = b;
                b += t;
            }
            return a;
        }

        console.log(fib(10));
        55
        
        2024-07-30
    • in either case, you’ll notice we need to support comparisons to know when to stop iterating! so let’s add those into our builtins:

      function comparisonBuiltin(op) {
          return (state, node) => {
              if (node.children.length != 3)
                  throw new Error("comparison operators require exactly two arguments");

              let a = treewalk.eval(state, node.children[1]);
              let b = treewalk.eval(state, node.children[2]);
              return op(a, b) ? 1 : 0;
          };
      }

      builtins["="] = comparisonBuiltin((a, b) => a === b);
      builtins["<"] = comparisonBuiltin((a, b) => a < b);

      it’s easy enough to derive !=, <=, >, and >= from these, so we won’t bother adding those in for now.

      2024-07-30
      • if you’re curious how to derive != and <=, consider that we’re returning zeros and ones, so we can do an AND operation by multiplying them.

        2024-07-30
      • > can be derived by reversing the arguments of <.

        2024-07-30
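      • to make the derivations concrete, here’s one possible spelling of them in JavaScript, with plain functions standing in for the haku builtins (the exact formulas are my own; the post only hints at the trick):

        ```javascript
        // comparisons return 0 or 1, so we can combine them with plain arithmetic
        const eq = (a, b) => (a === b ? 1 : 0);
        const lt = (a, b) => (a < b ? 1 : 0);

        const ne = (a, b) => 1 - eq(a, b); // NOT is subtraction from 1
        const le = (a, b) => 1 - lt(b, a); // a <= b is NOT (b < a)
        const gt = (a, b) => lt(b, a);     // > reverses the arguments of <
        const and = (x, y) => x * y;       // AND is multiplication

        console.log(ne(1, 2), le(2, 2), gt(3, 1), and(1, 1)); // 1 1 1 1
        ```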
    • of course, we’ll also need an if to be able to branch on the result of our comparison operators.

      builtins["if"] = (state, node) => {
          if (node.children.length != 4)
              throw new Error("an `if` must have a condition, true expression, and false expression");

          let condition = treewalk.eval(state, node.children[1]);
          if (condition !== 0) {
              return treewalk.eval(state, node.children[2]);
          } else {
              return treewalk.eval(state, node.children[3]);
          }
      };
      2024-07-30
    • now we can write ourselves a recursive Fibonacci!

      ((fn (fib) (fib fib 10))
       ; fib
       (fn (fib n)
           (if (< n 2)
               n
               (+ (fib fib (- n 1)) (fib fib (- n 2))))))

      note that in order to achieve recursion, we need to pass fib into itself - this is because the fib variable we’re binding into the first function is not visible in the second function.

      but if we run it now:

      55
      

      we can see it works just as fine as the JavaScript version!

      2024-07-30
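    • the self-passing trick translates directly to JavaScript, if you’d like to convince yourself it’s not haku magic:

      ```javascript
      // recursion without a named binding: pass the function into itself
      const fib = (self, n) => (n < 2 ? n : self(self, n - 1) + self(self, n - 2));
      console.log(fib(fib, 10)); // 55
      ```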
  • remember to remember

    2024-07-30
    • now, you might be wondering why I’m cutting our Fibonacci adventures short. after all, we’re only just getting started?

      2024-07-30
      • thing is, I really want to build something bigger. and one expression per code block’s not gonna cut it.

        2024-07-30
      • I’d like to start building a little library of utilities for writing haku code, but I have no way of saving these utilities for later!

        2024-07-30
    • therefore, it’s time for… a persistent environment!

      2024-07-30
    • once again, let me sketch out what I’d like it to look like. to declare a persistent value, you use def:

      (def fib
          (fn (n)
              (if (< n 2)
                  n
                  (+ (fib (- n 1)) (fib (- n 2))))))
      

      if this looks familiar, that’s because it probably is - I used the exact same example at the start of the post!

      2024-07-30
    • once you define a persistent value, you can refer to it as usual. persistent values will sit in a scope above builtins, so you will be able to shadow those if you want to (but please don’t.)

      (def fn if) ; Whoops! Guess your soul belongs to me now
      
      2024-07-30
    • of course, values will persist across code blocks, so I’d be able to refer to fib here as well:

      (fib 12)
      
      2024-07-30
    • and lastly, it’ll be possible to put multiple expressions in a code block. we’ll only treat the last one as the result.

      (def x 1)
      (def y 2)
      (def z (+ x y))
      
      2024-07-30
    • so let’s start by implementing the easiest part - the def builtin. we’ll need to augment our interpreter state once again, this time with the persistent environment:

      treewalk.init = (env, input) => {
          return {
              input,
              scopes: [new Map(Object.entries(builtins)), env],
              env,
          };
      };
      2024-07-30
    • of course now we will also need to teach our whole runtime about the environment, right down to the kernel…

      import { defaultEvalModule } from "treehouse/components/literate-programming/eval.js";

      export function run(env, input, node) {
          let state = treewalk.init(env, input);
          return treewalk.eval(state, node);
      }

      export function printEvalResult(env, input) {
          try {
              let tokens = lex(input);
              let ast = parse(tokens);
              let result = run(env, input, ast);
              // NOTE: `def` will not return any value, so we'll skip printing it out.
              if (result !== undefined) {
                  console.log(result);
              }
          } catch (error) {
              console.log(error.toString());
          }
      }

      kernel.evalModule = async (state, source, language, params) => {
          if (language == "haku") {
              state.haku ??= { env: new Map() };
              printEvalResult(state.haku.env, source);
              return true;
          } else {
              return await defaultEvalModule(state, source, language, params);
          }
      };
      2024-07-30
    • now for def - it’ll take the value on the right and insert it into env, so that it can be seen in the future.

      builtins.def = (state, node) => {
          if (node.children.length != 3)
              throw new Error(
                  "a `def` expects the name of the variable to assign, and the value to assign to the variable",
              );
          if (node.children[1].kind != "identifier")
              throw new Error("variable name must be an identifier");

          let name = node.children[1];
          let value = treewalk.eval(state, node.children[2]);
          state.env.set(state.input.substring(name.start, name.end), value);
      };
      2024-07-30
    • now let’s test it out!

      (def x 1)

      (+ x 1)
      2
      

      seems to be working!

      2024-07-30
    • now for the second part: we still want to permit multiple declarations per block of code, but currently our syntax doesn’t handle that:

      (def x 1)
      (def y 2)
      Error: unhandled node kind: error
      

      and by the way, I know this is a terrible error message. we’ll return to that later.

      2024-07-30
    • this is a pretty simple augmentation to the base syntax. instead of reading a single expression, we will read a toplevel - as many expressions as possible until we hit end of file.

      parser.parseToplevel = (state) => {
          let children = [];
          while (parser.current(state).kind != eof) {
              children.push(parser.parseExpr(state));
          }
          return {
              kind: "toplevel",
              children,
              // Don't bother with start..end for now.
          };
      };

      parser.parseRoot = (state) => parser.parseToplevel(state);
      2024-07-30
      • I’m stealing the name toplevel from OCaml. the name file didn’t quite seem right, since a haku program is not really made out of files, but is rather a long sequence of code blocks.

        2024-07-30
    • with a toplevel node ready, we can now handle it in our interpreter:

      treewalk.eval = (state, node) => {
          switch (node.kind) {
              case "integer":
                  let sourceString = state.input.substring(node.start, node.end);
                  return parseInt(sourceString);
              case "identifier":
                  return treewalk.lookupVariable(state, state.input.substring(node.start, node.end));
              case "list": {
                  let functionToCall = treewalk.eval(state, node.children[0]);
                  let result = functionToCall(state, node);
                  return result;
              }
              case "toplevel":
                  let result = undefined;
                  for (let i = 0; i < node.children.length; ++i) {
                      result = treewalk.eval(state, node.children[i]);
                      if (result !== undefined && i != node.children.length - 1)
                          throw new Error(`expression ${i + 1} had a result despite not being the last`);
                  }
                  return result;
              default:
                  throw new Error(`unhandled node kind: ${node.kind}`);
          }
      };
      2024-07-30
      • since eval (and likewise, a treehouse code block) is only allowed to have one result, we disallow any results other than the last one.

        2024-07-30
    • and with that…

      (def x 1)
      (def y 2)
      (+ x y)
      3
      

      we can now declare multiple, persistent values per code block!

      2024-07-30
  • but it’s never that easy is it

    2024-07-30
    • so let’s declare a little function to add some numbers together…

      (def add-two (fn (x) (+ x 2)))

      (add-two 1)
      Error: variable  is undefined
      

      ’scuse me??

      2024-07-30
    • not gonna lie, this one took me a while to figure out! but recall the structure of our AST nodes. it looks something like this:

      {
          "kind": "identifier",
          "start": 30,
          "end": 32
      }
      
      2024-07-30
      • now remember what we do in order to look up variables.

        return treewalk.lookupVariable(state, state.input.substring(node.start, node.end));
        

        what do you imagine happens when the state.input source string is different?

        2024-07-30
        • and, the source string does end up being different, because we end up parsing each block from scratch - we never concatenate them into something bigger!

          2024-07-30
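          • here’s a minimal sketch of what goes wrong (the offsets are illustrative, assuming the two blocks above):

            ```javascript
            // node spans from block 1 get applied to block 2's source string
            const block1 = "(def add-two (fn (x) (+ x 2)))";
            const block2 = "(add-two 1)";

            // in block 1, the identifier x in the body spans [24, 25)
            console.log(block1.substring(24, 25)); // "x"

            // but the closure later evaluates against block 2's (shorter) source,
            // so the same span yields an empty name - hence "variable  is undefined"
            console.log(block2.substring(24, 25)); // ""
            ```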
    • so we’ll have to fix this up by remembering the source string alongside each node somehow. I see two paths:

      2024-07-30
      • pre-slice the source string into each node

        2024-07-30
      • store a reference to the entire source string in each node

        2024-07-30
    • I’m no JavaScript optimization expert, but the 2nd option seems like it would avoid a bit of overhead… but I really do like the fact our AST can be neatly printed into readable JSON, so to preserve that property, we’ll go with the 1st option.

      2024-07-30
      • speed isn’t really our main concern with this first iteration of the interpreter - I prefer inspectability and easy prototyping.

        2024-07-30
    • we’ll write a function that walks over our AST, and inserts source strings into it.

      export function insertSources(node, input) {
          if (node.start != null) {
              node.source = input.substring(node.start, node.end);
          }
          if (node.children != null) {
              for (let child of node.children) {
                  insertSources(child, input);
              }
          }
      }
      2024-07-30
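    • here’s a quick usage sketch, with insertSources copied in so the example stands alone, and a hand-written AST in place of the parser’s output:

      ```javascript
      function insertSources(node, input) {
          if (node.start != null) {
              node.source = input.substring(node.start, node.end);
          }
          if (node.children != null) {
              for (let child of node.children) {
                  insertSources(child, input);
              }
          }
      }

      let input = "(+ 1 2)";
      let ast = {
          kind: "list", start: 0, end: 7,
          children: [
              { kind: "identifier", start: 1, end: 2 },
              { kind: "integer", start: 3, end: 4 },
              { kind: "integer", start: 5, end: 6 },
          ],
      };
      insertSources(ast, input);
      console.log(ast.children[0].source); // "+"
      ```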
      • now I am aware this is changing object shapes quite a lot, which is suboptimal. but I would really like to keep the interpreter simple, so bear with me.

        2024-07-30
    • now we can patch the relevant parts of the interpreter to read from the node.source field, instead of substringing the source string passed to the interpreter. this is pretty mechanical so I’ll just dump all the relevant code here:

      treewalk.eval = (state, node) => {
          switch (node.kind) {
              case "integer":
                  return parseInt(node.source); // <--
              case "identifier":
                  return treewalk.lookupVariable(state, node.source); // <--
              case "list":
                  let functionToCall = treewalk.eval(state, node.children[0]);
                  return functionToCall(state, node);
              case "toplevel":
                  let result = undefined;
                  for (let i = 0; i < node.children.length; ++i) {
                      result = treewalk.eval(state, node.children[i]);
                      if (result !== undefined && i != node.children.length - 1)
                          throw new Error(`expression ${i + 1} had a result despite not being the last`);
                  }
                  return result;
              default:
                  throw new Error(`unhandled node kind: ${node.kind}`);
          }
      };

      builtins.fn = (state, node) => {
          if (node.children.length != 3)
              throw new Error("an `fn` must have an argument list and a result expression");

          let params = node.children[1];
          if (node.children[1].kind != "list")
              throw new Error("expected parameter list as second argument to `fn`");

          let paramNames = [];
          for (let param of params.children) {
              if (param.kind != "identifier") {
                  throw new Error("`fn` parameters must be identifiers");
              }
              paramNames.push(param.source); // <--
          }

          let expr = node.children[2];

          return makeFunction(state, paramNames, expr);
      };

      builtins.def = (state, node) => {
          if (node.children.length != 3)
              throw new Error(
                  "a `def` expects the name of the variable to assign, and the value to assign to the variable",
              );
          if (node.children[1].kind != "identifier")
              throw new Error("variable name must be an identifier");

          let name = node.children[1];
          let value = treewalk.eval(state, node.children[2]);
          state.env.set(name.source, value); // <--
      };
      2024-07-30
    • and of course, to top it all off, we still need to insert source information into the nodes before evaluating our tree:

      import { defaultEvalModule } from "treehouse/components/literate-programming/eval.js";

      export function printEvalResult(env, input) {
          try {
              let tokens = lex(input);
              let ast = parse(tokens);
              insertSources(ast, input); // <--
              let result = run(env, input, ast);
              // NOTE: `def` will not return any value, so we'll skip printing it out.
              if (result !== undefined) {
                  console.log(result);
              }
          } catch (error) {
              console.log(error.toString());
          }
      }

      kernel.evalModule = async (state, source, language, params) => {
          if (language == "haku") {
              state.haku ??= { env: new Map() };
              printEvalResult(state.haku.env, source);
              return true;
          } else {
              return await defaultEvalModule(state, source, language, params);
          }
      };
      2024-07-30
    • let’s see if add-two works now. we have an outdated version of it in our env map, so let’s declare it again, using two input blocks like we did before:

      (def add-two (fn (x) (+ x 2)))

      (add-two 2)
      4
      

      cool!

      2024-07-30
  • data structures

    2024-07-30
    • for a language to really be useful, it needs to have data structures. fortunately we already have them at our disposal - enter linked lists!

      2024-07-30
    • the coolest part about lists is that we don’t even need to do anything on the JavaScript side to implement them - we can use our good old friend Lambda calculus, along with a really cool tool called Church encoding, which allows us to encode lists using nothing but functions!

      2024-07-30
      • haku also has some tricks up its sleeve which allow us to break free from the minimalistic confines of lambda calculus, which means we don’t have to implement everything. without further ado though, let’s get started!

        2024-07-30
    • first, we’ll implement a way to construct a linked list node - aka cons.

      (def clist/cons
          (fn (h t)
              (fn (get) (get h t))))
      2024-07-30
      • the way our lists will work is that each list node is an ordinary function. we’ll be able to pass a “getter” function to the list function to obtain the list’s head and tail.

        2024-07-30
      • I’m prefixing all of our Church-encoded list operations with clist/ to differentiate them from potential future list representations we’d want to implement.

        2024-07-30
    • now for extracting our head and tail.

      (def clist/head
          (fn (list)
              (list (fn (h t) h))))

      (def clist/tail
          (fn (list)
              (list (fn (h t) t))))

      these happen by passing that getter function to our list and using it to extract its head or tail only.

      2024-07-30
    • the last missing part is a marker for signifying the end of the list.

      thing is, we don’t really have to implement this, because we already have the literal 0! so knowing whether we’re at the end of the list is as simple as (= (clist/tail node) 0).

      2024-07-30
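    • if the function-as-list idea feels slippery, here’s the same representation sketched in JavaScript:

      ```javascript
      // a list node is a function that hands its head and tail to a getter
      const cons = (h, t) => (get) => get(h, t);
      const head = (list) => list((h, t) => h);
      const tail = (list) => list((h, t) => t);

      const list = cons(1, cons(2, 0));    // 0 marks the end of the list
      console.log(head(list));             // 1
      console.log(head(tail(list)));       // 2
      console.log(tail(tail(list)) === 0); // true - we've hit the end
      ```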
    • and that’s our list representation! let’s give it a shot.

      we’ll define a list containing the first five Fibonacci numbers:

      (def clist-with-fib-5
          (clist/cons 1
              (clist/cons 1
                  (clist/cons 2
                      (clist/cons 3
                          (clist/cons 5 0))))))
      2024-07-30
    • and a function to reduce a list to a single element. this function has various names in various languages, but the idea is that it allows us to walk over a list, modifying a value along the way, until we get a single, final value.

      (def clist/reduce
          (fn (init op list)
              (if (= (clist/tail list) 0)
                  (op init (clist/head list))
                  (clist/reduce
                      (op init (clist/head list))
                      op
                      (clist/tail list)))))

      once again, the recursive logic is kind of tricky; if you draw it out, you should be able to understand it much more easily!

      2024-07-30
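    • as a sanity check, the same reduce can be sketched in JavaScript on top of the Church-encoded helpers (inlined here so the example stands alone):

      ```javascript
      const cons = (h, t) => (get) => get(h, t);
      const head = (l) => l((h, t) => h);
      const tail = (l) => l((h, t) => t);

      // walk the list, folding each head into the accumulator
      const reduce = (init, op, list) =>
          tail(list) === 0
              ? op(init, head(list))
              : reduce(op(init, head(list)), op, tail(list));

      const fib5 = cons(1, cons(1, cons(2, cons(3, cons(5, 0)))));
      console.log(reduce(0, (a, b) => a + b, fib5)); // 12
      ```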
    • let’s see if we can sum our Fibonacci numbers together:

      (clist/reduce 0 + clist-with-fib-5)
      12
      

      nice!

      2024-07-30
    • can I just say something real quick

      2024-07-30
      • I’m swiftly starting to dislike my parenthesized syntax choices here. they would be fine in an editor capable of highlighting mismatched parentheses, but Helix refuses to highlight any parentheses in .tree files until I add a tree-sitter grammar to it.

        2024-07-30
        • the example above took me way longer to get working than I want to admit. honestly it’s a failure of tooling on my side (should’ve embedded source spans into all these errors so that they can be reported more cleanly!), but I really don’t want to spend too much time on what’s basically just a prototype.

          2024-07-30
      • I’ll carry on with them for a bit longer though, I really don’t wanna write a complicated parser right now.

        2024-07-30
  • string manipulation

    2024-08-05
    • being able to calculate numbers is good, but remember how we can only output one number - that kinda sucks!

      2024-08-05
    • therefore I’d like to be able to work on strings in haku. that way we’ll be able to implement more interesting programs, including my personal favorite - the Mandelbrot set!

      2024-08-05
    • to start things off, we’ll implement a basic syntax for strings. technically speaking we don’t really need string syntax, but it’s gonna make our code examples a lot nicer if we have it!

      2024-08-05
    • I find the modern orthodox string syntax just fine: "text between quotation marks."

      2024-08-05
      • we’ll only support double quotation marks to keep the syntax lean.

        2024-08-05
      • there’ll be no escape sequences for the time being - they inflate the surface area of the lexer quite a lot, and aren’t needed when you can store a bunch of magic strings in variables:

        (cat                          ; `cat` will be our string concatenation function
            "Hello, world!" \n        ; notice how \n is just a regular variable!
            "This is another line.")
        
        2024-08-05
      • stealing the idea from Zig, I’m making strings single-line only. if you want a multiline string, look at the example above.

        2024-08-05
    • let’s add that to the lexer then!

      lexer.string = (state) => {
          lexer.advance(state); // skip over initial quotation mark
          while (lexer.current(state) != '"') {
              if (lexer.current(state) == eof) {
                  return "error";
              }
              lexer.advance(state);
          }
          lexer.advance(state); // skip over closing quotation mark
          return "string";
      };

      lexer.nextToken = (state) => {
          let c = lexer.current(state);
          if (c == '"') {
              return lexer.string(state); // <--
          }
          if (isDigit(c)) {
              lexer.advanceWhile(state, isDigit);
              return "integer";
          }
          if (isIdentifier(c)) {
              lexer.advanceWhile(state, isIdentifier);
              return "identifier";
          }
          if (c == "(" || c == ")") {
              lexer.advance(state);
              return c;
          }
          if (c == eof) return eof;

          lexer.advance(state);
          return "error";
      };
      2024-08-05
    • and to the parser…

      parser.parseExpr = (state) => {
          let token = parser.current(state);
          switch (token.kind) {
              case "integer":
              case "identifier":
              case "string": // <--
                  parser.advance(state);
                  return { ...token };
              case "(":
                  return parser.parseList(state, token);
              default:
                  parser.advance(state);
                  return {
                      kind: "error",
                      error: "unexpected token",
                      start: token.start,
                      end: token.end,
                  };
          }
      };
      2024-08-05
    • now if we use a string, we should get an error from the interpreter about an unknown node kind:

      "hello!"
      Error: unhandled node kind: string
      
      2024-08-05
    • so to top this all off, we’ll make the interpreter produce a string value for string literals, like it produces numbers for integer literals:

      treewalk.eval = (state, node) => {
          switch (node.kind) {
              case "integer":
                  return parseInt(node.source);
              case "string": // <--
                  // NOTE: We chop the quotes off of the string literal here.
                  return node.source.substring(1, node.source.length - 1);
              case "identifier":
                  return treewalk.lookupVariable(state, node.source);
              case "list":
                  let functionToCall = treewalk.eval(state, node.children[0]);
                  return functionToCall(state, node);
              case "toplevel":
                  let result = undefined;
                  for (let i = 0; i < node.children.length; ++i) {
                      result = treewalk.eval(state, node.children[i]);
                      if (result !== undefined && i != node.children.length - 1)
                          throw new Error(`expression ${i + 1} had a result despite not being the last`);
                  }
                  return result;
              default:
                  throw new Error(`unhandled node kind: ${node.kind}`);
          }
      };

      "hello!"
      hello!
      
      2024-08-05
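    • the quote-chopping itself is just a substring from index 1 to length - 1:

      ```javascript
      // strip the surrounding double quotes off a string literal
      const literal = '"hello!"';
      console.log(literal.substring(1, literal.length - 1)); // hello!
      ```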
    • with strings added to the language, we’ll need some basic operations to put together and break apart strings.

      2024-08-05
      • I don’t want to write a whole load of wrapper code every single time we want to declare a simple JavaScript function that’s available from haku, so let’s build a helper for that first:

        export function wrapJavaScriptFunctionVarargs(func) {
            return (state, node) => {
                let args = Array(node.children.length - 1);
                for (let i = 1; i < node.children.length; ++i) {
                    args[i - 1] = treewalk.eval(state, node.children[i]);
                }
                return func(...args);
            };
        }

        export function wrapJavaScriptFunction(func) {
            let inner = wrapJavaScriptFunctionVarargs(func);
            return (state, node) => {
                if (node.children.length != func.length + 1)
                    throw new Error(
                        `\`${func.name}\` expects ${func.length} arguments, but ${node.children.length - 1} were given`,
                    );
                return inner(state, node);
            };
        }
        2024-08-05
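      • the arity check leans on two standard JavaScript features: a function’s declared parameter count is exposed as .length, and its name as .name. that’s also why builtins with optional arguments go through the varargs wrapper instead:

        ```javascript
        // .length counts declared parameters, .name is the declared name -
        // exactly the two things wrapJavaScriptFunction reads off the wrapped function
        function sub(str, start, end) { /* ... */ }
        console.log(sub.length, sub.name); // 3 sub
        ```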
      • with that, we’ll start off with cat, our concatenation function. mainly because I really like the name. =^..^=

        builtins.cat = wrapJavaScriptFunctionVarargs((...strings) => strings.join(""));

        (cat "hello, " "world!")
        hello, world!
        
        2024-08-05
      • then there’s also sub, for indexing. one thing that always seemed kind of arbitrary to me is that substring in JavaScript and other languages defaults its end argument to the string’s length; I personally think that if the second argument is not provided, you almost always want to just get the character at that index.

        builtins.sub = wrapJavaScriptFunctionVarargs((str, start, end) => {
            if (typeof str != "string") throw new Error("`sub` expects a string as the first argument");
            if (typeof start != "number") throw new Error("`sub` expects a number as the second argument");
            end ??= start + 1;
            return str.substring(start, end);
        });

        (sub "hello, world!" 0)
        h
        
        (sub "hello, world!" 0 5)
        hello
        
        2024-08-05
      • then of course to be able to look at a suffix of the string, we’ll need its len.

        builtins.len = wrapJavaScriptFunction((string) => string.length);

        (len "hello, world!")
        13
        
        2024-08-05
      • then we’ll also have a pair of functions that will convert between Unicode code points and strings.

        builtins.chr = wrapJavaScriptFunction((n) => String.fromCodePoint(n));
        builtins.ord = wrapJavaScriptFunction((s) => s.codePointAt(0));

        (chr 33)
        !
        
        (ord "!")
        33
        
        2024-08-05
      • and last but not least, a pair of functions to convert between numbers and strings.

        builtins["to-string"] = wrapJavaScriptFunction((n) => n.toString());
        builtins["to-number"] = wrapJavaScriptFunction((s) => parseInt(s));

        (cat "today's magic number is: " (to-string 65))
        today's magic number is: 65
        
        (+ (to-number "60") 5)
        65
        

        we won’t care about the fallibility of to-number for now; we’re hand-waving away all the error handling anyways.

        2024-08-05
  • conclusion for now

    2024-08-05
    • at this point I’ve experimented and thought about it enough that I now know just the perfect application for such a language, so I’ll be wrapping up here. maybe I’ll revisit haku some other time, but for now… it’s time to call it done.

      2024-08-05
      • I’ll be continuing to develop haku in another project, simply as “haku.” this version will be called “haku 0,” the progenitor of the final language.

        2024-08-05
    • ; Here's a blank canvas for you to play around with!
      (cat "hello" (chr 44) " world!")
      hello, world!
      
      2024-08-05