Top
Adventures in ML trading - Part 1
Exploring Code LLMs - Instruction fine-tuning, models and quantization
Getting Things Done with LogSeq
All posts
Nonviolent Communication
Introduction Nonviolent Communication (NVC), developed by psychologist Marshall Rosenberg, is an approach to communication that prioritizes empathy, deep listening, and the recognition of universal human needs. In his book Nonviolent Communication: A Language of Life, Rosenberg presents a framework designed to help individuals navigate conflicts, express themselves authentically, and foster meaningful connections. Rather than focusing on winning arguments or persuading others, NVC shifts the focus to understanding, ensuring that all parties feel heard and valued....
Syncing historical data from IBKR
Syncing Historical Data from IBKR: A Comprehensive Guide In this post, we’ll walk through a complete workflow for downloading historical data from Interactive Brokers (IBKR) and preparing it for analysis and backtesting. Why download from the broker? The core assumption is that we sync data directly from the broker, ensuring its accuracy for both trading and backtesting. Once this data is downloaded, we can build ML data batches for training models....
Statistical learnings from a failed 2024 Santa rally
Intro The Santa Claus Rally is a well-known stock-market narrative: investors are said to see positive returns during the final week of the year, from December 25th to January 2nd. But is it a real pattern or just a market myth? It is also claimed that the following year’s returns are positively correlated with the Santa rally. Again, real pattern or market myth?...
Adventures in ML trading - Part 2
Preface In my previous post, I developed a simple mean-reversion strategy based on an oscillating signal calculated from a stock’s distance to its 50-day simple moving average. However, the results revealed a key shortcoming: the algorithm struggled to account for momentum, leading to poorly timed exits during parabolic moves—either too early or too late. In this post, we’ll dive into momentum and conduct an analysis to validate our assumption. If we can confirm that incorporating momentum enhances the strategy, we’ll move forward with developing a more advanced approach to leverage it effectively....
Adventures in ML trading - Part 1
Part 1/3 - Exploring the mathematical, statistical, and probabilistic nature of the market. Specifically, I attempt to build a mean-reversion probability model, backtest it against historical data, and understand where and why it falls short. The results explain why simple statistical models fail to capture the complex beast that is the financial market. Nevertheless, this builds foundational understanding, and there is much to learn that I iterate on in the subsequent posts on the topic of ML-based trading.
June 2024 - Papers on Agents, Fine-tuning and reasoning
What’s included Multi-Agent RL for Adaptive UIs Is ‘Programming by Example’ (PBE) solved by LLMs Learning Iterative Reasoning through Energy Diffusion LoRA: Low-Rank Adaptation of LLMs Automating the Enterprise with Foundational Models MARLUI - Multi-Agent RL for Adaptive UI ACM Link: https://dl.acm.org/doi/10.1145/3661147 Paper: MARLUI Adaptive UIs Adaptive UIs - as opposed to regular UIs - are UIs that can actually adapt to the user’s needs. In this paper, the UI adapts to minimize the number of clicks needed to reach the outcome....
Evaluating LLM Benchmarks for React
Introduction I previously wrote about writing React code with the Deepseek-coder 33b model, and whether we could improve some of its shortcomings with the latest research in the LLM space. But to really measure and mark progress, we need a benchmark for testing various hypotheses. So in this post, I’m going to evaluate existing benchmarks that specifically measure the coding capabilities of LLMs. My goal is to build a benchmark that can test their React/TypeScript coding capabilities....
Can LLM's produce better code?
Introduction In my previous post, I tested a coding LLM on its ability to write React code. Specifically, I tried the currently leading open-source model on the HumanEval+ benchmark leaderboard - DeepseekCoder:33b-instruct. I used this model in development for a few weeks and published a subset of examples in the post. Even though I tried this on a relatively small problem size, there were some obvious issues that were recognisable to me, namely:...
Deepseek coder - Can it code in React?
Introduction The goal of this post is to deep-dive into LLMs that are specialized in code generation tasks and see if we can use them to write code. Note: unlike Copilot, we’ll focus on locally running LLMs. This should be appealing to any developers working in enterprises that have data privacy and sharing concerns, but who still want to improve their developer productivity with locally running models. To test our understanding, we’ll perform a few simple coding tasks, compare the various methods of achieving the desired results, and also show the shortcomings....
Exploring Code LLMs - Instruction fine-tuning, models and quantization
Part 1/3 - Evaluating LLMs that are specialised in code generation tasks and measuring their performance on writing code. This post starts with concepts and theory, while the next two parts evaluate specific code models.