notes

1. VC infrastructure

In heptapod we have a root group named comp, containing a variety of subgroups. Some of these groups should be public, while others are internal to comp members exclusively. Within each subgroup, root group members should automatically be granted privileged access to projects. This is relevant for the startup subgroup in particular, where each project is potentially maintained by multiple non-root contributors.

We also need to consider how we will manage subrepos across the organization. It's about time we started integrating HG bundles and potentially mirrors. For our core VC pipeline we should have no reliance on Git, but this may be difficult - it depends on the behavior of HG bundles.

Bookmarks/tags should be used for milestones; they are infrequent in the root group and more frequent in projects with a regular release life-cycle.

2. Approaching Webapps

I started poking around in the webapp space again so that I can launch a landing page for NAS-T quickly. The Rust situation has improved somewhat on the frontend side, and the axum backend stack is nice.

This might seem like a lot of Rust and not a lot of Lisp, which it is, but there's still room for Lisp wherever we need it. It mostly plays a role in the backend, servicing the database and responding to requests from the Rust edges. All of the important tests for the web APIs are also written in Lisp. We will almost certainly use Lisp for all static processing and HTML generation at compile-time.

This, I believe, is the appropriate way to integrate Lisp into a cutting-edge web-app. You get the good parts of Lisp where you need them (interactive debugging, dynamic language, REPL) and avoid the bad parts (OOB optimization, RPS performance) in areas where the customer would be impacted. In this domain, Lisp takes the form of glue rather than the bricks and mortar it sometimes appears to us as.

3. virt

3.1. QEMU

3.2. KVM

3.3. Hyper-V

3.4. Firecracker

3.5. Docker

3.6. Vagrant

3.7. LXC

3.8. LXD

3.9. containerd

3.10. systemd-nspawn

3.11. VirtualBox

4. Concatenative

4.1. Factor   factor

  • [2023-07-04 Tue] Factor is a cool concatenative lang but unfortunately the C interface (vm/master.h) no longer exists on the master branch.

5. Lisp   lisp

These notes pertain to Lisp. More specifically, ANSI Common Lisp in most places.

6. Rust

6.1. Serde

  • [2023-07-05 Wed]
    Serde is an important part of the Rust ecosystem and another dtolnay contribution. If you want to implement a data format in the Rust ecosystem, this is how you do it.

    The way it works is that you define some special structs, a Serializer and a Deserializer, which implement the Serializer and Deserializer traits provided by serde, respectively.

    You can use these structs to provide your public API. The conventional choice is public top-level functions like from_str and to_string. That's it - your serialization library can now read and write your data format as Rust data types.
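    To make the convention concrete, here's what those entry points look like from the consumer side, using serde_json (a minimal sketch; assumes serde with the derive feature and serde_json in Cargo.toml):

    use serde::{Deserialize, Serialize};

    #[derive(Serialize, Deserialize, Debug)]
    struct Point { x: i32, y: i32 }

    fn main() -> Result<(), serde_json::Error> {
        let p = Point { x: 1, y: 2 };
        let s = serde_json::to_string(&p)?;       // Rust -> data format
        let q: Point = serde_json::from_str(&s)?; // data format -> Rust
        println!("{s} -> {q:?}");
        Ok(())
    }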

    enum-representations

    • the default behavior is an externally tagged representation (verbose)
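    For example, here's what the externally tagged default produces, along with the standard serde attributes that select the other representations (serde_json shown for the output):

    use serde::{Deserialize, Serialize};

    #[derive(Serialize, Deserialize)]
    enum Message {
        Request { id: u32 },
        Ping,
    }

    fn main() {
        // externally tagged (default): the variant name wraps the payload
        let s = serde_json::to_string(&Message::Request { id: 7 }).unwrap();
        assert_eq!(s, r#"{"Request":{"id":7}}"#);
        // alternatives, chosen with attributes on the enum:
        //   #[serde(tag = "type")]             -> internally tagged
        //   #[serde(tag = "t", content = "c")] -> adjacently tagged
        //   #[serde(untagged)]                 -> untagged
    }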

    The docs use strings as the core IO type when implementing a custom format, but the convention is to implement over types bound by the std::io Read or Write traits. Then you can provide a more robust public API (from_bytes, from_reader, to_writer, etc.).
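    serde_json follows exactly this convention; a thin reader-based entry point looks like the following (hypothetical wrapper, shown only to illustrate the signature style):

    use std::io::Read;
    use serde::de::DeserializeOwned;

    // deserialize from any reader; serde_json implements its Deserializer
    // over R: Read, so this just delegates
    fn from_reader<R: Read, T: DeserializeOwned>(rdr: R) -> serde_json::Result<T> {
        serde_json::from_reader(rdr)
    }

    fn main() {
        let m: std::collections::HashMap<String, i32> =
            from_reader(r#"{"x":1}"#.as_bytes()).unwrap();
        println!("{m:?}");
    }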

7. C

8. CPP

9. Nu

10. AWS usage

We're leveraging AWS for some of our public web servers for now. It's really not realistic to expect that my home desktop and spotty Comcast internet can serve any production workflow. What it is capable of is a private VPN, which can communicate with AWS and other cloud VPN depots via WireGuard (article).

I currently use Google Domains for nas-t.net, otom8.dev, and rwest.io - but that business is now owned by Squarespace, so I would rather move it to Route53.

We have archlinux ec2 image builds here and here - they only half work and aren't maintained, but it's a start. I'm not even sure if I should stick with arch or cave and use Ubuntu or AWS Linux. We can serve the static services with little cost; the only big spender will be the heptapod instance, which requires a larger instance and some workers.

We'll try to keep the cost at or around $30/month.

11. IDEAS

11.1. TODO shed

ID: fc9a94e1-91c5-4915-90b8-73218fa3b8bc
  • State "TODO" from [2023-04-07 Fri 23:24]

rlib > ulib > ulib > ulib > ulib

11.1.1. TODO sh* tools

ID: c0613a13-7ccb-4af9-b47e-e14a41c782c2
  • State "TODO" from "TODO" [2023-04-07 Fri 23:22]

shc,shx,etc

11.2. WIP packy

  • State "TODO" from [2023-04-07 Fri 23:33]

11.2.1. WIP rust

11.2.2. WIP common-lisp

11.2.3. WIP emacs-lisp

11.2.4. python

11.2.5. julia

11.2.6. C

11.2.7. C++

11.3. TODO tenex

  • State "TODO" from [2023-04-07 Fri 23:52]

11.4. TODO mpk

  • State "TODO" from [2023-04-07 Fri 23:52]

11.5. TODO cfg

  • State "TODO" from [2023-04-07 Fri 23:34]

11.6. TODO obj

  • State "TODO" from [2023-04-07 Fri 23:51]

split out from rlib into a separate package

  • a purely OOP class library

11.7. TODO lab

  • State "TODO" from [2023-04-07 Fri 23:34]

11.8. TODO source categories

  • need a way of extracting metadata from a repo
  • need ability to search and query libs/packages
  • separate modules based on where they belong in our stack?
    • app
    • lib
    • script?
    • dist
      • software distros

11.9. TODO generic query language

from obj protocol? sql compatibility?

check out kdb

11.10. TODO bbdb

  • Note taken on [2023-10-24 Tue 22:16]
    graph database, built on rocksdb

insidious Big Brother database.

  • an application built with obj
  • sql

11.11. TODO NAS-TV   nas t

  • media streaming
  • gstreamer backend
  • audio/video

12. DRAFT dylib-skel-1

  • State "DRAFT" from [2023-11-05 Sun 22:23]

12.1. Overview

Our core languages are Rust and Lisp - this is the killer combo which will allow NAS-T to rapidly develop high-quality software. As such, it's crucial that these two very different languages (i.e. compilers) are able to interoperate seamlessly.

Some interop methods are easy to accommodate via the OS - such as IPC or data sharing - but others are a bit more difficult.

In this 2-part series we'll build an FFI bridge between Rust and Lisp - something that can be difficult due to some complications with Rust, and because this is not the most popular software stack (yet ;). This is an experiment and may not make it to our code-base, but it's definitely something worth adding to the toolbox in case we need it.

12.2. FFI

The level of interop we're after in this case is FFI.

Basically, calling Rust code from Lisp and vice-versa. There's an article about calling Rust from Common Lisp here which shows the basics and serves as a great starting point for those interested.

12.2.1. Rust != C

The complication with Rust I mentioned earlier is really just that it is not C. C is old, i.e. well-supported with a stable ABI, which makes creating bindings for a C library a breeze in many languages.

For a Rust library we need to first appease the compiler, as explained in this section of the Rustonomicon. Among other things, it involves marking functions extern "C" and #[no_mangle] to fix their calling convention and symbol names, and editing the Cargo.toml file to produce a C-compatible binary. The default Rust ABI is unstable and can't reliably be used the way the C ABI can.

12.2.2. Overhead

Using FFI involves some overhead. Check here for an example benchmark across a few languages. While building the NAS-T core, I'm very much aware of this, and will need a few sanity benchmarks to make sure the cost doesn't outweigh the benefit. In particular, I'm concerned about crossing multiple language barriers (Rust<->C<->Lisp).

12.3. Rust -> C -> Lisp

12.3.1. Setup

For starters, I'm going to assume we all have Rust (via rustup) and Lisp (sbcl only) installed on our GNU/Linux system (some tweaks needed for Darwin/Windows, not covered in this post).

  1. Cargo

    Create a new library crate. For this example we're focusing on a 'skeleton' for dynamic libraries only, so our experiment will be called dylib-skel or dysk for short. cargo init dysk --lib && cd dysk

    A src/lib.rs will be generated for you. Go ahead and delete that. We're going to be making our own lib.rs file directly in the root directory (just to be cool).

    The next step is to edit your Cargo.toml file. Add these lines after the [package] section and before [dependencies]:

    [lib]
    # cdylib -> C-compatible shared object (.so); rlib -> linkable from our test bin
    crate-type = ["cdylib","rlib"]
    path = "lib.rs"
    [[bin]]
    name = "dysk-test"
    path = "test.rs"
    

    The cdylib tells Rust to generate a C-compatible shared object with a .so extension which we can open using dlopen; the rlib is what lets our dysk-test binary link against the library directly.

  2. cbindgen
    1. install

      Next, we want the cbindgen program which we'll use to generate header files for C/C++. This step isn't necessary at all, we just want it for further experimentation.

      cargo install --force cbindgen

      We append the cbindgen crate as a build dependency to our Cargo.toml like so:

      [build-dependencies]
      cbindgen = "0.24"
      
    2. cbindgen.toml
      language = "C"
      autogen_warning = "/* Warning, this file is autogenerated by cbindgen. Don't modify this manually. */"
      include_version = true
      namespace = "dysk"
      cpp_compat = true
      after_includes = "#define DYSK_VERSION \"0.1.0\""
      line_length = 88
      tab_width = 2
      documentation = true
      documentation_style = "c99"
      usize_is_size_t = true
      [cython]
      header = '"dysk.h"'
      
    3. build.rs
      fn main() -> Result<(), cbindgen::Error> {
        // generate dysk.h (per cbindgen.toml) on every build
        let crate_dir = std::env::var("CARGO_MANIFEST_DIR").unwrap();
        cbindgen::generate(crate_dir)?.write_to_file("dysk.h");
        Ok(())
      }
      

12.3.2. lib.rs

//! lib.rs --- dysk library
use std::ffi::{c_char, c_int, CString};

/// Return a C string allocated on the Rust heap. The allocation is
/// intentionally leaked for the demo; a real API would expose a
/// matching free function.
#[no_mangle]
pub extern "C" fn dysk_hello() -> *const c_char {
  CString::new("hello from rust").unwrap().into_raw()
}

#[no_mangle]
pub extern "C" fn dysk_plus(a: c_int, b: c_int) -> c_int { a + b }

#[no_mangle]
pub extern "C" fn dysk_plus1(n: c_int) -> c_int { n + 1 }

12.3.3. test.rs

//! test.rs --- dysk test
fn main() {
  let mut i = 0u32;
  while i < 500_000_000 {
    i += 1;
    dysk::dysk_plus1(2 as core::ffi::c_int);
  }
}

12.3.4. compile

cargo build --release

12.3.5. load from SBCL

;; symbols are in SB-ALIEN; alien routine names given as symbols are
;; translated to foreign names (dysk-hello -> "dysk_hello")
(load-shared-object #P"target/release/libdysk.so")
(define-alien-routine dysk-hello c-string)
(define-alien-routine dysk-plus int (a int) (b int))
(define-alien-routine dysk-plus1 int (n int))
(dysk-hello) ;; => "hello from rust"

12.3.6. benchmark

# shell: the pure-Rust loop
time target/release/dysk-test
;; SBCL: the same loop through FFI
(time (dotimes (_ 500000000) (dysk-plus1 2)))

13. cl-dot examples

(defmethod cl-dot:graph-object-node ((graph (eql 'example)) (object cons))
  (make-instance 'cl-dot:node
                 :attributes '(:label "cell \\N"
                               :shape :box)))
(defmethod cl-dot:graph-object-points-to ((graph (eql 'example)) (object cons))
  (list (car object)
        (make-instance 'cl-dot:attributed
                       :object (cdr object)
                       :attributes '(:weight 3))))
;; Symbols
(defmethod cl-dot:graph-object-node ((graph (eql 'example)) (object symbol))
  (make-instance 'cl-dot:node
                 :attributes `(:label ,object
                               :shape :hexagon
                               :style :filled
                               :color :black
                               :fillcolor "#ccccff")))
(let* ((data '(a b c #1=(b z) c d #1#))
       (dgraph (cl-dot:generate-graph-from-roots 'example (list data)
                                                 '(:rankdir "LR" :layout "twopi" :labelloc "t"))))
  (cl-dot:dot-graph dgraph "test-lr.svg" :format #+nil :x11 :svg))
(let* ((data '(a b))
       (dgraph (cl-dot:generate-graph-from-roots 'example (list data)
                                                 '(:rankdir "LR"))))
  (cl-dot:print-graph dgraph))

14. global refs

need a way of indexing, referring to, and annotating objects such as URLs, docs, articles, source files, etc.

What is the best way to get this done?

15. doc best practices

16. On Computers

If you've met me in the past decade, you probably know that I am extremely passionate about computers. Let me first explain why.

On the most basic level computers are little (or big) machines that can be programmed to do things, or compute if we're being technical.1

They host and provide access to the Internet, which is a pretty big thing, but they do little things too like unlock your car door and tell your microwave to beep at you. They solve problems. Big or small.

They're also everywhere - which can be scary to think about, but ultimately helps propel us into the future.

There's something pretty cool about that - when you look at the essence of computation. There are endless quantities of these machines which follow the same basic rules and can be used to solve real problems.

16.1. The Programmer

Now, let us consider the programmer. They have power. Real power. They understand the language of computers, and can whisper to them in various dialects. It can be intimidating to witness until you realize how often the programmer says the wrong thing - a bug.

In reality, the programmer has a symbiotic relationship with computers. Good programmers understand this relationship well.

One day after I got my first job at a software company, I remember being in an all-hands meeting due to a client service outage. We had some management, our lead devs, the product team, and one curious-looking man who happened to be our lead IT consultant, who had just joined. He was sitting up on a hotel bed, shirtless, vaping an e-cig, typing away in what I can only imagine was a shell prompt.

After several minutes he took a swig from a bottle of Coke and said "Node 6 is sick." A few seconds later, our services were restored. For the next hour on the call he explained what happened and why, but that particular phrase always stuck with me. He didn't say Node 6 was down, or had an expired cert - his diagnosis was that it was sick.

The more you work closely with computers, the more you start to think of them this way. You don't start screaming when the computer does the wrong thing, you figure out what's wrong and learn from it. With experience, you start to understand the different behaviors of the machines you work with. I like to call this Machine Empathy.

16.2. Programs

I already mentioned bugs - I write plenty of those, but usually I try to write programs. Programs to me are like poetry. I like to think they are for the computer too.

Just like computers, computer programs come in different shapes and sizes but in basic terms they are sets of instructions used to control a computer.

You can write programs to do anything - when I first started, my programs made music. The program was a means to an end. Over time, I started to see the program as something much more. I saw it as the music itself.

17. On Infra

Something that is missing from many organizations, big or small, is an effective way to store and access information, even about their own org.

It can be a difficult problem to solve - usually there's the official source, say Microsoft SharePoint, and then the list of unofficial sources which become tribal corporate hacker knowledge. Maybe the unofficial ones are more current, or are annotated nicely, but their very existence breaks the system. There's no longer a single source of truth.

My priority in this department is writing services which process and store information from a variety of sources in a distributed knowledge graph. The graph can later be queried to access information on-demand.

My idea of infrastructure is in fact to build my own Cloud. Needless to say I don't have an O365 subscription, and wherever possible I'll be relying on hardware I have physical access to. I'm not opposed to cloud services at large, but on principle I like to think we shouldn't be built on them.

18. https://cal-coop.gitlab.io/utena/utena-specification/main.pdf

From the author of cl-decentralise2: a draft specification of a Maximalist Computing System.

19. public datasets

20. useful internals

sb-sys:*runtime-dlhandle*
sb-fasl:+fasl-file-version+
sb-fasl:+backend-fasl-file-implementation+
sb-debug:print-backtrace
sb-debug:map-backtrace
sb-pretty:pprint-dispatch-table
sb-lockless:
sb-ext:simd-pack
sb-walker:define-walker-template
sb-walker:macroexpand-all
sb-walker:walk-form
sb-kernel:empty-type
sb-kernel:*eval-calls*
sb-kernel:*gc-pin-code-pages*
sb-kernel:*restart-clusters*
sb-kernel:*save-lisp-clobbered-globals*
sb-kernel:*top-level-form-p*
sb-kernel:*universal-fun-type*
sb-kernel:*universal-type*
sb-kernel:*wild-type*
sb-kernel:+simd-pack-element-types+
(sb-vm:memory-usage)
(sb-vm:boxed-context-register)
(sb-vm:c-find-heap->arena)
(sb-vm:copy-number-to-heap)
(sb-vm:dump-arena-objects)
(sb-vm:fixnumize)
(sb-vm:rewind-arena)
(sb-vm:show-heap->arena)
(sb-vm:with/without-arena)
(sb-cltl2:{augment-environment,compiler-let,define-declaration,parse-macro})
(sb-cltl2:{declaration-information, variable-information, function-information})
sb-di:
sb-assem:
sb-md5:
sb-regalloc:
sb-disassem:

21. SigMF

Sharing sets of recorded signal data is an important part of science and engineering. It enables multiple parties to collaborate, is often a necessary part of reproducing scientific results (a requirement of scientific rigor), and enables sharing data with those who do not have direct access to the equipment required to capture it.

Unfortunately, these datasets have historically not been very portable, and there is not an agreed upon method of sharing metadata descriptions of the recorded data itself. This is the problem that SigMF solves.

By providing a standard way to describe data recordings, SigMF facilitates the sharing of data, prevents the "bitrot" of datasets wherein details of the capture are lost over time, and makes it possible for different tools to operate on the same dataset, thus enabling data portability between tools and workflows.

the-spec: https://github.com/sigmf/SigMF/blob/sigmf-v1.x/sigmf-spec.md

22. LibVOLK

Vector-Optimized Library of Kernels (simd)

23. /dev/fb*

framebuffers, used by fbgrab/fbcat program

24. ublk

https://github.com/ming1/ubdsrv goals: make problems smaller.

sections: why lisp?

  • doesn't need mentioning more and more

25. TODO taobench demo

  • State "TODO" from [2024-01-21 Sun 00:32]

https://github.com/audreyccheng/taobench - shouldn't have missed this :) obviously we need to implement this using core – in demo/bench/tao?

26. TODO clap completion for nushell

27. Dataframe scripting

28. Cloud Squatting

28.1. Google

28.2. Amazon

  • AWS Free Tier

28.3. Akamai

  • Linode Free Trial

28.4. Oracle

  • OCI Free Tier
    • always free: 2 x oracle autonomous DB
    • 2 x AMD Compute VMs
    • up to 4 x ARM Ampere A1 with 3k CPU-hours and 18k GB-hours per month
    • block/object/archive storage
    • 30-day $300 credits

29. NOTE trash as block device

  • State "NOTE" from [2024-01-29 Mon 20:53]
  • State "NOTE" from [2024-01-29 Mon 20:53]

in nushell there is an option for the rm command to always use the 'trash' - AFAIK the current approach is via a service (trashd).

An interesting experiment would be to designate a block device as 'trash' - it may be possible to remove the reliance on a service.

may be an opportunity for a ublk driver to shine - instead of /dev/null piping we need a driver for streaming a file to /dev/trash

30. NOTE compute power

  • State "NOTE" from [2024-01-29 Mon 16:28]
  • mostly x86_64 machines - currently 2 AWS EC2 instances, some podman containers, and our home beowulf server:
  • beowulf:
    • Zor
      • mid-size tower enclosed (Linux/Windows)
      • CPU
        • Intel Core i7-6700K
        • 4 cores @ 4.0 GHz
      • GPU
        • NVIDIA GeForce GTX 1060
        • 6GB
      • Storage
        • Samsung SSD 850: 232.9GB
        • Samsung SSD 850: 465.76GB
        • ST2000DM001-1ER1: 1.82TB
        • WDC WD80EAZZ-00B: 7.28TB
        • PSSD T7 Shield: 3.64TB
        • My Passport 0820: 1.36TB
      • RAM
        • 16GB (2*8) [64GB max]
        • DDR4
    • Jekyll
      • MacBook Pro 2019 (MacOS/Darwin)
      • CPU
        • Intel
        • 8 @
      • RAM
        • 32G DDR4
    • Hyde
      • Thinkpad
      • CPU
        • Intel
        • 4 @
      • RAM
        • 24G DDR3
    • Boris
      • Pinephone Pro
      • CPU
        • 64-bit 6-core 4x ARM Cortex A53 + 2x ARM Cortex A72
      • GPU
        • Mali T860MP4
      • RAM
        • 4GB LPDDR4
    • pi
      • Raspberry Pi 4 Model B
      • CPU
        • Cortex-A72 (ARM v8) 64-bit SoC
        • 4 @ 1.8GHz
      • RAM
        • 8 GB
        • LPDDR4-3200

31. BigBenches

let ms = '1trc/measurements-0.parquet'
dfr open $ms
| dfr group-by  station
| dfr agg [
  (dfr col measure | dfr min | dfr as "min")
  (dfr col measure | dfr max | dfr as "max")
  (dfr col measure | dfr sum | dfr as "sum")
  (dfr col measure | dfr count | dfr as "count")
]

32. NOTE WL vs X

  • State "NOTE" from [2024-02-18 Sun 11:55]

In the past few months there has been drama regarding Wayland vs X. It seems to be on everyone's minds after Artem's freakout issue and the follow up YT vids/comments.

I admit that it made me reconsider the fitness of WL as a whole - there was a github gist that made some scathing arguments against it.

It's an odd debate though. I think there are many misunderstandings.

So first off, if we look at the homepage https://wayland.freedesktop.org/, Wayland claims it is a replacement for X11. It now has manifest destiny, which in my opinion is a great shame.

X-pros seem to agree that Wayland has manifest destiny - like if you are building software that looks remotely like a window system, it's a successor to X. That's the model of doing things and there's no way around it.

The disagreement starts with how this destiny - of an X2 - should be fulfilled. X-pros want a fork of X, but it's too late for that. WL-pros want X to run on top of a Wayland compositor: https://wayland.freedesktop.org/xserver.html.

Xwayland is a problem for me. From the project description: 'if we're migrating away from X, it makes sense to have a good backwards compatibility story.' Full disclosure: I have never done significant work on Xwayland, so perhaps my opinion is unwarranted. But I have no intention of attempting to maintain a computer system that uses Wayland and X clients at the same time.

To me, X is ol' reliable. Every distro has first-class X support, and it runs on most systems with very little user intervention. Where it doesn't, there is 20+ years of dev history and battle-tested workarounds for you to find your solution in.

Wayland is the new kid on the block, born just in 2008. It's a fresh start to one of the most difficult challenges in software - window systems. A re-write would be pointless though, and so the real value-add is in design. Wayland is designed as a protocol and collection of libraries which are implemented in your own compositor. Coming from Lisp - with ANSI Common Lisp and SRFIs, this feels right even if the implementation is something very different (compositor vs compiler).

With X, it is assumed to be much harder to write an equivalent 'compositor'. Here's the thing though - a significantly complex X client implementation is impossible to replicate exactly in WL. This is really the crux of Artem's argument in his issue. He asked for a 1:1 equivalent X/WL comparison when no such thing exists, and in my opinion it is a waste of time.

The WL core team is fully aware of this dichotomy, and also knows that it is in no way a problem or weakness in either system. It means they're different systems, goddammit.

If it was up to me, Xwayland wouldn't exist. I understand why it does, and that it does make things easier for developers who need to support both, and for users who have multiple apps with multiple windowing requirements. It's a bandaid though, and one that is particularly dangerous because it reinforces the idea that Wayland is just X2 and that they're fully compatible.

What interests me in the Wayland world right now is the idea of a small, modular, full-stack Wayland compositor API. There are several 'kiosk' based compositors for single applications (cage), but these aren't complete solutions. It is possible to get much closer to the metal, and that's where I want to be so that I can build my own APIs on top - I don't want to live on top of X, and I certainly don't want to live on top of X on top of WL. I want a pure solution that hides as little as possible, exposing the interesting bits.

33. TODO collect more data

  • State "TODO" from [2024-03-01 Fri 15:27]

https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/ weather - music - etc

34. NOTE On blocks and devices

  • State "NOTE" from [2024-03-02 Sat 21:30]

In Linux, everything is a file. /dev contains special device files - usually block or character devices.

major, minor = category, device (e.g. 0, 5)

mknod - create special device files

redhat hints

dd if=/dev/zero of=myfile bs=1M count=32
losetup --show -f myfile
ls -al /dev/loop0
losetup -d /dev/loop0 # teardown
echo "sup dude" > /dev/loop0
dd if=/dev/loop0 bs=1
dd if=/dev/nvme0 of=/dev/null status=progress
# pacman -S hdparm
hdparm -T /dev/nvme0
modprobe scsi_debug add_host=5 max_luns=10 num_tgts=2 dev_size_mb=16

sparsefiles: create with C, dd, or truncate

truncate --help

test mkfs.btrfs on 10T dummy block device

dd if=/dev/zero of=/tmp/bb1 bs=1 count=1 seek=10T
du -sh /tmp/bb1
losetup --show -f /tmp/bb1
mkfs.btrfs /dev/loop0

diagnostics

iostat # pacman -S sysstat
blktrace # paru -S blktrace
iotop # pacman -S iotop

bcc/ trace: Who/which process is executing specific functions against block devices?

bcc/biosnoop: Which process is accessing the block device, how many bytes are accessed, which latency for answering the requests?

at the kernel level, besides BPF, we've got kmods and DKMS

compression/de-duplication can be done via VDO kernel mod

https://en.wikipedia.org/wiki/Dynamic_Kernel_Module_Support

35. NOTE save-lisp-and-respawn

  • State "NOTE" from [2024-03-02 Sat 22:57]
sb-ext:*save-hooks*

36. NOTE syslog for log

  • State "NOTE" from [2024-03-03 Sun 16:35]

sb-posix:

  • openlog syslog closelog
  • levels: emerg alert crit err warning notice info debug
  • setlogmask

37. RESEARCH sbcl-wiki

  • State "RESEARCH" from [2024-03-13 Wed 21:49]

37.1. IR1

37.2. IR2

38. NOTE DB Benchmarking

  • State "NOTE" from [2024-02-04 Sun 20:40]

RocksDB benchmarking tools

39. NOTE packy design

ID: 76ae24f5-46e8-4b91-8991-41245383d337
  • State "NOTE" from [2024-01-25 Thu 22:39]

39.1. Lib

39.1.1. Types

  1. Pack

    Primary data type of the library - typically represents a compressed archive, metadata, and ops.

  2. Bundle

    Collection data type, usually contains a set of packs with metadata.

  3. PackyEndpoint

    Represents a Packy instance bound to a UDP socket

  4. PackyEndpointConfig

    Global endpoint configuration object

  5. PackyClientConfig

    Configuration for outgoing packy connections on an endpoint

  6. PackyServerConfig

    Configuration for incoming packy connections on an endpoint

  7. PackyConnection

    Packy connection object
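A minimal sketch of how these types might line up in Rust - every field here is an assumption for illustration, not the real definitions:

// hypothetical shapes for the core packy types described above
pub struct PackMeta { pub name: String, pub version: String }

/// compressed archive + metadata (ops elided)
pub struct Pack { pub archive: Vec<u8>, pub meta: PackMeta }

/// a set of packs with metadata
pub struct Bundle { pub packs: Vec<Pack>, pub meta: PackMeta }

/// a packy instance bound to a UDP socket
pub struct PackyEndpoint { pub socket: std::net::UdpSocket }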

39.1.2. Traits

  1. PackyClient
    1. query
    2. install
    3. update
    4. login
    5. logout
    6. pull
    7. push
  2. PackyServer
    1. start_packy_server
    2. stop_packy_server
    3. start_packy_registry
  3. PackyRegistry
    1. register_pack
    2. register_user
    3. register_bundle

40. TBD investigate alieneval for phash opps

  • State "TBD" from [2024-03-25 Mon 18:56]

41. TBD

  • State "TBD" from [2024-03-25 Mon 18:57]

42. How it works

The backend services are written in Rust and controlled by a simple messaging protocol. Services provide common runtime capabilities known as the service protocol but are specialized on a unique service type which may in turn register their own custom protocols (via core).

Services are capable of dispatching data directly to clients, or storing data in the database (sqlite, postgres, mysql).
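To make the messaging idea concrete, here's a hypothetical shape for the control messages - names and variants are assumptions for illustration, not the actual protocol defined in core:

// sketch: a service understands a few generic control messages plus
// service-specific payloads registered via custom protocols
enum ServiceMsg {
    Start { service_type: String },
    Stop,
    Dispatch { client_id: u64, payload: Vec<u8> },
    Store { table: String, payload: Vec<u8> },
}

fn main() {
    let _m = ServiceMsg::Dispatch { client_id: 1, payload: b"tick".to_vec() };
}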

The frontend clients are predominantly written in Common Lisp and come in many shapes and sizes. There is a cli-client, web-client (CLOG), docker-client (archlinux, stumpwm, McCLIM), and native-client which also compiles to WASM (slint-rs).

43. Guide

43.1. Build

  • install dependencies

    ./tools/deps.sh
    
  • make executables
    Simply run make build. Read the makefile and change the options as needed:
    • Mode (debug, release)
    • Lisp (sbcl, cmucl, ccl)
    • Config (default.cfg)

43.2. Run

./demo -i

43.3. Config

Configs can be specified in JSON, TOML, RON, or of course SEXP. See default.cfg for an example.

43.4. Play

The high-level user interface is presented as a multi-modal GUI application which adapts to the specific application instances below.

43.4.1. Weather

This backend retrieves weather data using the NWS API.

43.4.2. Stocks

The 'Stocks' backend features a stock ticker with real-time analysis capabilities.

43.4.3. Bench

This is a benchmark backend for testing the capabilities of our demo. It spins up some mock services and allows fine-grained control of input/throughput.

44. tasks

44.1. TODO DSLs

  • consider tree-sitter parsing layout, use as a guide for developing a single syntax which expands to Rust or C.
  • with-rs
  • with-c
  • with-rs/c
  • with-cargo
  • compile-rs/c

44.1.1. TODO rs-macroexpand

  • rs-gen-file
  • rs-defmacro
  • rs-macros
  • rs-macroexpand
  • rs-macroexpand-1

44.1.2. TODO c-macroexpand

  • c-gen-file h/c
  • c-defmacro
  • c-macros
  • c-macroexpand
  • c-macroexpand-1

44.1.3. TODO slint-macroexpand

  • slint-gen-file
  • slint-defmacro
  • slint-macros
  • slint-macroexpand
  • slint-macroexpand-1

44.1.4. TODO html (using who)

44.2. TODO web templates

create a basic static page in CL which will be used to host Slint UIs and other WASM doo-dads in a browser.

44.3. TODO CLI

using clingon, decide on generic options and write it up

44.4. TODO docs

work on doc generation – Rust and CL should be accounted for.

44.5. TODO tests

We have none! Need to make it more comfy - set up testing in all Rust crates and for the lisp systems.

45. https://docs.gitlab.com/ee/administration/backup_restore/migrate_to_new_server.html

46. ideas

46.1. use branches for separate levels of expansion

  • or perhaps some other VC feature.. although I don't want any parallel to time, as if expansions occur in sequence. Thus things like tags don't feel quite right.

47. research

for libraries, always prefer de facto libs

47.1. screenshotbot-oss

  • monolithic repo, includes third-party dependencies
    • full quicklisp source
    • asdf, etc
  • addresses many of my concerns about running CL in prod
  • the repo is too heavy for my liking though
  • I do like the idea of having many systems though

47.2. DB

47.2.1. CLIENT

  1. mito

    ORM, sqlite, postgres, mysql support

  2. cl-dbi

    database independent interface

  3. sxql

    SQL generator

47.2.2. SERVICE

  1. sqlx
    • supports rustls, tokio
    • we should write the service queries using a common-lisp DSL!

      sqlx = { version = "0.7", features = [ "runtime-tokio", "tls-rustls", "any", "chrono" ] }
      
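    A minimal sketch of standing up a pool with those features (assumes the sqlite feature is also enabled so the in-memory URL resolves; the query is illustrative):

      use sqlx::AnyPool;

      #[tokio::main]
      async fn main() -> Result<(), sqlx::Error> {
          // register the compiled-in drivers with the Any backend (sqlx 0.7+)
          sqlx::any::install_default_drivers();
          let pool = AnyPool::connect("sqlite::memory:").await?;
          let row: (i64,) = sqlx::query_as("SELECT 1").fetch_one(&pool).await?;
          assert_eq!(row.0, 1);
          Ok(())
      }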

47.3. LOGGING

47.3.1. CLIENT

  1. log4cl

    supports slime well

47.3.2. SERVICE

  1. tracing
  2. tokio-console - monitoring tool

    works with tracing using the console-subscriber crate

47.4. UI

48. roadmap

I think the roadmap should be product/management oriented. Agile terminology applies and things are grouped into sprints/trains/PIs/etc. There's really no need for that currently - at least not until there are 10 or so contributors. The inbox.org workflow is much more 'agile' in fact, i.e. hackable.

I would like to make use of core/inbox.el and ORGAN - perhaps move inbox.el to a new repository, where it will live as a package which we can contribute to MELPA.


49. Inbox Architecture

50. Inbox Metadata

50.1. Tags

Pandora's box. I guess we should make use of decorators/capitalization for significant tags, and the rest are user-defined.

50.2. IDs

Not entirely committed to uuid, but maybe it makes the most sense to use the timestamp-based one.
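If the timestamp-based kind ends up meaning UUIDv7, the uuid crate (already on our crates list) can mint them - a tiny sketch, assuming uuid = { version = "1", features = ["v7"] }:

fn main() {
    // v7 = millisecond unix timestamp + randomness, so ids sort by creation time
    let id = uuid::Uuid::now_v7();
    println!("{id}");
}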

50.3. Status

A Status should be applied to tasks only.

We need a significant number of 'in progress' types, but each completed task will start as TODO and end up at DONE.

50.4. Dates

Deadline, Scheduled, DATE property, LOGBOOK

50.5. Log

The logbook should be used to record progress throughout the lifetime of an item.

50.6. Description

Descriptions can be blank, but tasks in need of review require a description.

50.7. Properties

  • Effort
  • Category

50.8. Links

I don't think we need org-roam for this? TBD. The thing is that I want link data to end up in a set of rocksdb instances instead of sqlite.

For the time being we should limit the scope to a set of properties:

  • PREVIOUS
  • REQUIRES
  • RELATED
  • PARENT

Note there are no forward references.

51. Notifications

discord bot? prob use rust, parse json or something

52. File Systems

52.1. BTRFS

BTRFS is a Linux filesystem based on copy-on-write, allowing for efficient snapshots and clones.

It uses B-trees as its main on-disk data structure. The design goal is to work well for many use cases and workloads. To this end, much effort has been directed to maintaining even performance as the filesystem ages, rather than trying to support a particular narrow benchmark use-case.

Linux filesystems are installed on smartphones as well as enterprise servers. This entails challenges on many different fronts.

Scalability
The filesystem must scale in many dimensions: disk space, memory, and CPUs.
Data integrity
Losing data is not an option, and much effort is expended to safeguard the content. This includes checksums, metadata duplication, and RAID support built into the filesystem.
Disk diversity
The system should work well with SSDs and hard disks. It is also expected to be able to use an array of different sized disks, which poses challenges to the RAID and striping mechanisms.

– Rodeh, Ohad and Bacik, Josef and Mason, Chris (2013)

52.1.1. [2023-08-08 Tue] btrfs performance speculation

  • https://www.percona.com/blog/taking-a-look-at-btrfs-for-mysql/
    • zfs outperforms immensely, but potential misconfiguration on btrfs side (virt+cow still enabled?)
  • https://www.ctrl.blog/entry/btrfs-vs-ext4-performance.html
    • see the follow up comment on this post
      • https://www.reddit.com/r/archlinux/comments/o2gc42/is_the_performance_hit_of_btrfs_serious_is_it/

        I’m the author of OP’s first link. I use BtrFS today. I often shift lots of de-duplicatable data around, and benefit greatly from file cloning. The data is actually the same data that caused the slow performance in the article. BtrFS and file cloning now performs this task quicker than a traditional file system. (Hm. It’s time for a follow-up article.)

        In a laptop with one drive: it doesn’t matter too much unless you do work that benefit from file cloning or snapshots. This will likely require you to adjust your tooling and workflow. I’ve had to rewrite the software I use every day to make it take advantage of the capabilities of a more modern file system. You won’t benefit much from the data recovery and redundancy features unless you’ve got two storage drives in your laptop and can setup redundant data copies.

        on similar hardware to mine?

        It’s not a question about your hardware as much as how you use it. The bad performance I documented was related to lots and lots of simultaneous random reads and writes. This might not be representative of how you use your computer.

  • https://dl.acm.org/doi/fullHtml/10.1145/3386362
    • this is about distributed file systems (in this case Ceph) - they argue against basing DFS on ondisk-format filesystems (XFS ext4) - developed BlueStore as backend, which runs directly on raw storage hardware.
    • this is a good approach, but expensive (2 years in development) and risky
    • better approach is to take advantage of a powerful enough existing ondisk-FS format and pair it with supporting modules which abstract away the 'distributed' mechanics.
    • the strategy presented here is critical for enterprise-grade hardware where the ondisk filesystem becomes the bottleneck that you're looking to optimize
  • https://lore.kernel.org/lkml/cover.1676908729.git.dsterba@suse.com/
    • linux 6.3 patch by David Sterba [2023-02-20 Mon]
    • btrfs continues to show improvements in the linux kernel, ironing out the kinks
    • makes it hard to compare benchmarks tho :/

52.1.2. MacOS support

  • see this WIP k-ext for macos: macos-btrfs
    • maybe we can help out with the VFS/mount support

52.1.3. on-disk format

  • on-disk-format
  • 'btrfs consists entirely of several trees. the trees use copy-on-write.'
  • trees are stored in nodes which belong to a level in the b-tree structure.
  • internal nodes (inodes) contain refs to other inodes on the next level OR
    • to leaf nodes when the level reaches 0.
  • leaf nodes contain various item types depending on the tree.
  • basic structures (the key layout is sketched as a struct after this list)
    • 0:8 uint = objectid, each tree has its own set of object IDs
    • 8:1 uint = item type
    • 9:8 uint = offset, depends on type.
    • little-endian
    • fields are unsigned
    • superblock
      • primary superblock is located at 0x10000 (64KiB)
      • Mirror copies of the superblock are located at physical addresses 0x4000000 (64 MiB) and 0x4000000000 (256GiB), if valid. copies are updated simultaneously.
      • during mount only the first super block at 0x10000 is read, error causes mount to fail.
      • BTRFS only recognizes disks with a valid 0x10000 superblock.
    • header
      • stored at the start of every inode
      • data following it depends on whether it is an internal or leaf node.
    • inode
      • node header followed by a number of key pointers
      • 0:11 key
      • 11:8 uint = block number
      • 19:8 uint = generation
    • lnode
      • leaf nodes contain header followed by key pointers
      • 0:11 key
      • 11:4 uint = data offset relative to end of header(65)
      • 15:4 uint = data size
  • objects
    • ROOT_TREE
      • holds ROOT_ITEMs, ROOT_REFs, and ROOT_BACKREFs for every tree other than itself.
      • used to find the other trees and to determine the subvol structure.
      • holds items for the 'root tree directory'. its laddr is stored in the superblock
    • objectIDs
      • free ids: BTRFS_FIRST_FREE_OBJECTID=256ULL:BTRFS_LAST_FREE_OBJECTID=-256ULL
      • otherwise used for internal use
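Here's the struct sketch of the 17-byte key layout referenced in the list above (field names are mine; the widths and ordering are as described):

// btrfs disk key: 8-byte objectid, 1-byte item type, 8-byte offset;
// little-endian and packed on disk
#[repr(C, packed)]
pub struct DiskKey {
    pub objectid: u64,  // 0:8  per-tree object id
    pub item_type: u8,  // 8:1  item type
    pub offset: u64,    // 9:8  interpretation depends on item_type
}

fn main() {
    assert_eq!(std::mem::size_of::<DiskKey>(), 17);
}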

52.1.4. send-stream format

  • send stream format
  • Send stream format represents a linear sequence of commands describing actions to be performed on the target filesystem (receive side), created on the source filesystem (send side).
  • The stream is currently used in two ways: to generate a stream representing a standalone subvolume (full mode) or a difference between two snapshots of the same subvolume (incremental mode).
  • The stream can be generated using a set of other subvolumes to look for extent references that could lead to a more efficient stream by transferring only the references and not full data.
  • The stream format is abstracted from on-disk structures (though it may share some BTRFS specifics), the stream instructions could be generated by other means than the send ioctl.
  • it's a checksum+TLV format (decoded in the sketch after this list)
  • header: u32 len, u16 cmd, u32 crc32c
  • data: type, length, raw data
  • the v2 protocol supports the encoded commands
  • the commands are kinda clunky - need to MKFILE/MKDIR then RENAME to create
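Decoding that 10-byte command header is one line per field (a sketch; names are mine, layout and endianness per the format above):

// send-stream command header: u32 length, u16 command, u32 crc32c (LE)
fn parse_cmd_header(buf: &[u8; 10]) -> (u32, u16, u32) {
    let len = u32::from_le_bytes(buf[0..4].try_into().unwrap());
    let cmd = u16::from_le_bytes(buf[4..6].try_into().unwrap());
    let crc = u32::from_le_bytes(buf[6..10].try_into().unwrap());
    (len, cmd, crc)
}

fn main() {
    let raw = [4, 0, 0, 0, 1, 0, 0xDE, 0xAD, 0xBE, 0xEF];
    println!("{:?}", parse_cmd_header(&raw));
}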

52.1.5. [2023-08-09 Wed] ioctls

52.2. ZFS

– Rodeh, O. and Teperman, A. (2003)

  • core component of TrueNAS software

52.3. TMPFS

– Peter Snyder (1990)

  • in-mem FS

52.4. EXT4

– Djordjevic, Borislav and Timcenko, Valentina (2011)

52.5. XFS

– Wang, Randolph Y and Anderson, Thomas E (1993) – Adam Sweeney and Doug Doucette and Wei Hu and Curtis Anderson and Mike Nishimoto and Geoff Peck (1996)

53. Storage Mediums

53.1. HDD

– Zhang, Mingyu and Ge, Wenqiang and Tang, Ruichun and Liu, Peishun (2023)

53.2. SSD

– Do, Jaeyoung and Kee, Yang-Suk and Patel, Jignesh M. and Park, Chanik and Park, Kwanghyun and DeWitt, David J. (2013) – Zuck, Aviad and Gühring, Philipp and Zhang, Tao and Porter, Donald E. and Tsafrir, Dan (2019)

53.3. Flash

– Kwak, Jaewook and Lee, Sangjin and Park, Kibin and Jeong, Jinwoo and Song, Yong Ho (2020)

53.4. NVMe

– Kim, Seongmin and Kim, Kyusik and Shin, Heeyoung and Kim, Taeseok (2020) – specifications

53.4.1. ZNS

– Matias Bjørling and Abutalib Aghayev and Hans Holmberg and Aravind Ramesh and Damien Le Moal and Gregory R. Ganger and George Amvrosiadis (2021)

Zoned Storage is an open source, standards-based initiative to enable data centers to scale efficiently for the zettabyte storage capacity era. There are two technologies behind Zoned Storage, Shingled Magnetic Recording (SMR) in ATA/SCSI HDDs and Zoned Namespaces (ZNS) in NVMe SSDs.

zonedstorage.io – $465 8tb 2.5"? retail

53.5. eMMC

– Zhou, Deng and Pan, Wen and Wang, Wei and Xie, Tao (2015)

54. Linux

54.1. syscalls

54.1.1. ioctl

55. Rust

55.1. crates

55.1.1. nix

55.1.2. memmap2

55.1.3. zstd

55.1.4. rocksdb

55.1.5. tokio   tokio

55.1.6. tracing   tokio

  1. tracing-subscriber

55.1.7. axum   tokio

55.1.8. tower   tokio

55.1.9. uuid

55.2. unstable

55.2.1. lazy_cell

55.2.2. {BTreeMap,BTreeSet}::extract_if

56. Lisp

56.2. Reference Projects

56.2.1. StumpWM

56.2.2. Nyxt

56.2.3. Kons-9

56.2.4. cl-torrents

56.2.5. Mezzano

56.2.6. yalo

56.2.7. cl-ledger

56.2.8. Lem

56.2.9. kindista

56.2.10. lisp-chat

57. Refs

Adam Sweeney and Doug Doucette and Wei Hu and Curtis Anderson and Mike Nishimoto and Geoff Peck (1996). Scalability in the XFS File System, {USENIX} Association.

Djordjevic, Borislav and Timcenko, Valentina (2011). Ext4 File System Performance Analysis in Linux Environment, World Scientific and Engineering Academy and Society (WSEAS).

Do, Jaeyoung and Kee, Yang-Suk and Patel, Jignesh M. and Park, Chanik and Park, Kwanghyun and DeWitt, David J. (2013). Query Processing on Smart SSDs: Opportunities and Challenges, Association for Computing Machinery.

Kim, Seongmin and Kim, Kyusik and Shin, Heeyoung and Kim, Taeseok (2020). Practical Enhancement of User Experience in NVMe SSDs, Applied Sciences.

Kwak, Jaewook and Lee, Sangjin and Park, Kibin and Jeong, Jinwoo and Song, Yong Ho (2020). Cosmos+ OpenSSD: Rapid Prototype for Flash Storage Systems, Association for Computing Machinery.

Matias Bjørling and Abutalib Aghayev and Hans Holmberg and Aravind Ramesh and Damien Le Moal and Gregory R. Ganger and George Amvrosiadis (2021). ZNS: Avoiding the Block Interface Tax for Flash-based SSDs, USENIX Association.

Peter Snyder (1990). tmpfs: A Virtual Memory File System.

Rodeh, O. and Teperman, A. (2003). zFS - a scalable distributed file system using object disks.

Rodeh, Ohad and Bacik, Josef and Mason, Chris (2013). BTRFS: The linux B-tree filesystem, ACM Transactions on Storage (TOS).

Wang, Randolph Y and Anderson, Thomas E (1993). xFS: A wide area mass storage file system.

Zhang, Mingyu and Ge, Wenqiang and Tang, Ruichun and Liu, Peishun (2023). Hard Disk Failure Prediction Based on Blending Ensemble Learning, Applied Sciences.

Zhou, Deng and Pan, Wen and Wang, Wei and Xie, Tao (2015). I/O Characteristics of Smartphone Applications and Their Implications for eMMC Design.

Zuck, Aviad and Gühring, Philipp and Zhang, Tao and Porter, Donald E. and Tsafrir, Dan (2019). Why and How to Increase SSD Performance Transparency, Association for Computing Machinery.

58. query langs

Queries are extremely important in software development and having a robust query engine is a must for CC.

Our goal is to develop a query-language compiler (Q) which can be tuned at compile-time to meet the needs of any database backend.

The query languages that interest us most are derived from Prolog (/datalog) and SQL, but we won't be supporting all of their features - only the ones that can be reasonably coerced to all supported frontends.

Q will require an Intermediate Representation (IR) - the encoding will be based on S-expressions with a specialized reader.
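A hedged sketch of what an S-expression IR node could look like (shape and names are assumptions, not a committed design):

// minimal S-expression IR: atoms and lists are enough to encode
// datalog- and SQL-derived query forms
#[derive(Debug)]
pub enum Sexp {
    Atom(String),
    List(Vec<Sexp>),
}

fn main() {
    // (select (cols a b) (from t))
    let q = Sexp::List(vec![
        Sexp::Atom("select".into()),
        Sexp::List(vec!["cols", "a", "b"].into_iter().map(|s| Sexp::Atom(s.into())).collect()),
        Sexp::List(vec![Sexp::Atom("from".into()), Sexp::Atom("t".into())]),
    ]);
    println!("{q:?}");
}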


59. Mock Skel Readme

59.1. Overview

status
WIP
forge
Heptapod
mirror
Github

This system provides functions and macros for building and deploying project skeletons. This is not a general purpose templating system. It is specifically for my software stack.

59.1.1. Goals

  • vaporize boilerplate code and docs
  • integrate reasonably well with my tools (Emacs/etc)
  • object-oriented project management

59.2. Quickstart

Make sure you have sbcl installed:

sbcl --version

Then compile the program. This command produces a binary called skel in the project root:

sbcl --noinform  --non-interactive --eval '(ql:quickload :app/cli/skel)' --eval '(asdf:make :app/cli/skel)'

Run the binary without any args to print a skeleton of the current project directory (*skel-project*), or pass -h for usage:

skel -h

Here's skel's skelfile:


This is just a form without the top-level parentheses - you're free to omit them in a skelfile.

59.2.1. describe

The describe command can be used to check the currently active skelfile, printing any errors and the parsed object.

skel show

59.2.2. TODO compile

Skelfiles can be compiled to produce a new project skeleton or update an existing one.

Try compiling skel's skelfile:

skel compile

You may also compile individual components of the project structure, for example, to compile the rules into a makefile:

skel compile --rules
cat makefile

59.3. Examples

59.3.1. Default

When you run skel init this is the basic skelfile that will be generated in the current directory, depending on the following contexts:

  • default user config
  • directory contents
  • cli args

With no cli args or user config and an empty directory the output looks like this:

;;; examples @ 2023-10-09.23:38:23 -*- mode: skel; -*-
:name "examples"

59.3.2. Imports

59.3.3. Multi

59.4. Tests

The unit tests may also be a useful reference:

(ql:quickload :skel/tests)
(in-package :skel.tests)
(setq *log-level* nil)
;; (setq *catch-test-errors* nil)
(setq *compile-tests* t)
(list (multiple-value-list (do-tests :skel)) (test-results *test-suite*))

59.5. API

TODO
CLOS-based core classes
TODO
EIEIO-based wrapper classes

;;; docs/vocab — project glossary -- mode:outline;outline-regexp:"[-]+" --

  • wrt : With Respect To
  • nyi : Not Yet Implemented
  • rt : Regression Testing
  • vm : Virtual Machine
  • asm : Assembly
  • comp : Compiler
  • eval : Evaluate
  • alloc : Allocate
  • ret : Return
  • tco : Tail-call Optimization
  • reg : Register
  • sbcl : Steel Bank Common Lisp
  • rs : Rust (the language)
  • el : Emacs Lisp
  • cl : Common Lisp
  • pcl : Practical Common Lisp
  • taomop : The Art of the Meta Object Protocol
  • lol : Let Over Lambda

Footnotes:

1

… perform computations