Configurations for linters used at WNY
Updated 2026-04-15 00:54:45 -04:00
Updated 2026-04-12 23:03:31 -04:00
Used Pi to create a big-bang simulator from scratch with no user-written code (every line generated by the LLM) using a local instance of Qwen3.5 9B. Testing the Pi agentic coding harness from @mariozechner (`npm install -g @mariozechner/pi-coding-agent`)
Updated 2026-04-11 01:38:33 -04:00
Updated 2026-04-08 17:20:58 -04:00
Updated 2026-04-08 16:10:36 -04:00
A theme, config and collection of plugins for Neovim
Updated 2026-04-08 12:04:25 -04:00
History of my changes to Ollama to enable SYCL support
Updated 2026-04-07 16:21:38 -04:00
My first C project: a Multi-User Dungeon (MUD)
Updated 2026-04-03 19:15:01 -04:00
A hypothetical website template for bootstrapping new projects.
Updated 2026-04-02 22:59:09 -04:00
Updated 2026-03-17 18:14:33 -04:00
MCP and REST API server for Phoenix Home Investments
Updated 2026-02-13 17:01:07 -05:00
A demake of a game about climbing a mountain.
Updated 2026-02-11 06:21:47 -05:00
Configuration scripts/tools that allow local LLMs (currently supporting Ollama models) to use bash tools and MCP servers via Python
Updated 2026-01-29 10:33:40 -05:00
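The core of letting a local model drive bash tools is a dispatch step: the model emits a structured tool call, and a harness executes it and returns the output. A minimal sketch of that step, assuming a JSON call format with `name` and `arguments` fields (the project's actual wire format and tool names are not documented here and these are illustrative assumptions):

```python
import json
import subprocess

# Hypothetical tool registry: maps a tool name the model may emit to a
# local callable. Only a single "bash" tool is sketched here.
TOOLS = {
    "bash": lambda args: subprocess.run(
        args["command"], shell=True, capture_output=True, text=True
    ).stdout,
}

def dispatch(tool_call_json: str) -> str:
    """Parse a model-emitted JSON tool call and run the matching tool."""
    call = json.loads(tool_call_json)
    tool = TOOLS.get(call["name"])
    if tool is None:
        return f"unknown tool: {call['name']}"
    return tool(call.get("arguments", {}))

# A model turn that requests a shell command:
result = dispatch('{"name": "bash", "arguments": {"command": "echo hello"}}')
print(result)
```

In a real loop, `result` would be appended to the conversation as a tool message so the model can read the command's output before its next turn.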
Python library for hashing and decoding files based on filename and creation datetime (microsecond-sensitive, based on hash creation time, not file creation time), allowing fingerprinting and indexing of files and file versions.
Updated 2026-01-26 12:14:14 -05:00
Automatic Background Generator for Hyprland
Updated 2025-12-02 15:13:49 -05:00
All-Inclusive RAG application with expansive functionality
Updated 2025-12-01 23:29:12 -05:00
NotebookLM open-source alternative
Updated 2025-10-27 17:04:38 -04:00
DuckDB RAG implementation for a cloud-based RAG database using Hugging Face datasets
Updated 2025-10-26 23:22:38 -04:00
Organize your GNOME overview applications by category
Updated 2025-10-09 00:24:18 -04:00
Transformers Latent-Space Reasoning Auto Train Once: a framework that applies the 'Auto Train Once' pruning technique (https://arxiv.org/abs/2403.14729) to a latent-space reasoning model, aiming for maximum performance with minimal neuron activation/creation/utilization in the LLM.
Updated 2025-10-03 16:00:26 -04:00