2026 Resolutions

Last year, I struggled to stick to my resolutions. Perhaps this is because I set myself goals that were too hard, or because I didn’t try hard enough. Either way, it was not a successful experience. This year, I will write my resolutions in the prophetic past tense, as a method of improving how many goals I hit.

Health / Lifestyle

In 2026, I maintained my reduced sugar intake.

In 2024, I greatly reduced the amount of sugar that I eat. I want to continue to maintain this, as well as start tracking it.

In 2026, I went on 52 bike rides / runs.

This one is self-explanatory: one ride or run per week.

In 2026, I meditated on 250 days.

I’ve been meditating on & off for a while, but I think consistently doing it would be so beneficial to my health and well-being. I will aim for 5 - 10 minute sessions.

In 2026, I did 250 stretch/yoga sessions.

I really need to build up more mobility to avoid getting stiff as I get older; these sessions will help with that.

In 2026, I did 5,500 push-ups.

I’m also looking to build up upper body strength, as well as to force myself to do some exercise in the mornings.

Personal Development

In 2026, I read 12 books.

Month      Book
January    One Flew Over the Cuckoo’s Nest
February   Lolita
March      Ulysses
April      n/a
May
June
July
August
September
October
November
December

In 2026, I read 50 scientific papers.

As part of this, I thought I’d read Ilya’s 30u30. I know a lot about the concepts in some of the papers/blogs, but I think reading them in more detail will be useful.

Title: End-to-end multi-modal product matching in fashion e-commerce
Review: Interesting paper. Zalando discuss using CLIP plus a custom linear projection head to produce embeddings that let them find products that are the same across multiple sellers. One point of note was that they found images are far more important than text when using these types of models to embed fashion information.

Title: The First Law of Complexodynamics
Review: Initially, I was confused about why this is included in the 30u30. But, after thinking about it, it kind of makes sense for ML: we want the entropy within the model to sit at the point where something complex occurs, because that is where we will get the most interesting results.

Title: The Unreasonable Effectiveness of Recurrent Neural Networks
Review: Lovely blog post from Karpathy where he trains lots of different character-based RNNs and shows how good the results can get. It really inspires me to mess around with smaller models just to do fun things.

Title: Prompt Repetition Improves Non-Reasoning LLMs
Review: This paper is kind of a funny one, and I think it sums up modern AI research. Essentially, they found that by repeating the input prompt 2x or 3x, the model significantly outperforms stating the prompt just once, without increasing latency (since the prompt can be part of the cached pre-fill step). Super interesting, and worth trying out any time you are working with LLMs.

Title: Understanding LSTM Networks
Review: The 3rd article in Ilya’s 30 papers, focused on the difference between LSTMs and RNNs. I’ve not worked with LSTMs in depth, so this was an interesting article to read, and it helped me understand them a little better.

Title: Recurrent Neural Network Regularization
Review: The 4th article in Ilya’s 30 papers, and the first of his own. He discusses how to apply dropout to recurrent networks (specifically LSTMs). Previously, applying the dropout regularization technique did not work for these models; however, he found that applying it only to the non-recurrent connections (the inputs to each layer, not the cell state or the hidden state) means dropout no longer harms performance, and in fact improves it. I would assume this is because the model’s memory is not corrupted at every layer and timestep.
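That dropout placement is easy to sketch in code. Below is a minimal, hypothetical PyTorch example (the class name and sizes are my own, not from the paper): dropout is applied only to the non-recurrent connections, i.e. the input feeding each LSTM layer, and never to the hidden or cell state carried across timesteps.

```python
import torch
import torch.nn as nn

class InputDropoutLSTM(nn.Module):
    """Sketch: dropout on non-recurrent connections only.

    Dropout is applied to the input of each LSTM layer (and to the
    final output), but never to the hidden/cell state carried across
    timesteps, so the recurrent "memory" path is left uncorrupted.
    """

    def __init__(self, input_size, hidden_size, num_layers=2, p=0.5):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.LSTM(input_size if i == 0 else hidden_size, hidden_size,
                    batch_first=True)
            for i in range(num_layers)
        )
        self.drop = nn.Dropout(p)

    def forward(self, x):
        for lstm in self.layers:
            x = self.drop(x)   # non-recurrent connection: safe to drop
            x, _ = lstm(x)     # recurrent state inside stays untouched
        return self.drop(x)

x = torch.randn(4, 10, 8)      # (batch, time, features)
model = InputDropoutLSTM(8, 16)
out = model(x)
print(out.shape)               # torch.Size([4, 10, 16])
```

Worth noting: PyTorch’s built-in nn.LSTM has a dropout argument that does essentially this when num_layers > 1, applying dropout between stacked layers rather than on the recurrent connections.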

In 2026, I solidified my level B Portuguese.

The lessons I started last year really helped build up my confidence. Now it’s time to apply that & really reach independence when speaking Portuguese.

In 2026, I finished mojchine-learning.

Mojchine-Learning is my attempt at writing various ML techniques from scratch in both PyTorch (Easy) and Mojo (Much Harder). I think this will be very useful to finish.

Travel

In 2026, I visited 10 places outside of London.

January

Flaine, France

February

Old Trafford, Manchester, England.

Monthly Reflections

January

I have been progressing well towards my goals in January. I’m behind on a lot, but I feel like I’ve been putting in the effort to at least try to do them, which I think will set good groundwork for future months. The only thing to work on is reading papers, for which I have to come to terms with the fact that I won’t remember everything from a paper, just the important bits. It’s a different style of reading than what I’m used to, and one I’m looking forward to practicing.

February

Much better progress on reading papers this month - I read 3, which is 75% of the target. The other numerical targets are progressing well which is good, although I definitely need to meditate more.

I have not worked on gpt-cotts or mojchine learning this year at all, so this is going to be something I start thinking about from March. Otherwise, I’m happy with my progress.

March

Life is progressing well in March. I’ve started tracking running & bike rides as part of my 52 now. I’ve also started practicing Portuguese more consistently, which is really helping w/ my lessons. Need to pick up paper reading a little more, but other than that, it’s progressing!