AI minds and the ‘inner loop’

Back in late 2022, I started working on my pet project, codenamed ‘Armin’ (for ‘artificial mind’). It was all about creating an AI agent with some real mind-like faculties. Mind you, this was pre-ChatGPT, so I was using the good old completion models, namely text-davinci-003. I did a series of videos on Armin’s development until my motivation eventually dwindled: a lot was changing in the LLM scene and I felt somewhat lost on how to proceed.

Although I’ve mostly been on the sidelines for the past year and a half, I’ve kept a close eye on everything new being released: new models and new capabilities.

Last month I finally decided to pick up and resume a project that branched off from Armin. It’s an AI personal digital assistant that lives as a Telegram bot (because that’s where I feel an AI assistant would be most useful and easily reachable from anywhere).

With all the new stuff that has been coming out, I’ve managed to quickly get it to a good level of quality using GPT-4 Turbo, Whisper (so that I can “speak” via Telegram voice messages), and GPT-4 Vision (so that I can send photos).
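
As an aside, the voice path needs surprisingly little code. Here’s a minimal sketch of the Whisper step using the OpenAI Python client (the Telegram download plumbing is left out, and transcribe_voice_message is just an illustrative name, not my actual code):

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def transcribe_voice_message(path: str) -> str:
    # Turn a downloaded Telegram voice note into text via Whisper.
    with open(path, "rb") as audio:
        result = client.audio.transcriptions.create(model="whisper-1", file=audio)
    return result.text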

With all that boilerplate out of the way, I finally began dipping my toes into the crucial part that motivated me to start Armin in the first place: the inner loop.

The Inner Loop

Nowadays there are thousands of people using ChatGPT (or similar) models to build chatbots and all kinds of AI assistants. They all suffer from the same problem: they feel too much like an artificial chatbot. They reply when you ask a question, and that’s it. Sure, they can take a complex question or request, browse for you, call APIs, etc. But in the end, it’s always a call-and-response thing.

For me, a truly “alive” assistant would have to be much more than that. It needs to be proactive, aware of everything happening in my life (that I decide to share), and able to reflect on previous conversations, put all of this together, come up with new ideas, and take the initiative to act or to ask/tell me something. All of this has to happen without the user asking for it.

To achieve this, I began implementing the “inner loop”.

The inner loop has a modified system prompt that instructs the model on what it is supposed to do and how it differs from the “chat” part of the engine. I feed it a larger prompt with more context than usual and then let the AI “think” about it.
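
In rough Python terms, the prompt assembly looks something like this (a simplified sketch; the names are illustrative, not my actual code):

from datetime import datetime

INNER_LOOP_SYSTEM_PROMPT = (
    "You are the background 'thinking' process of a personal assistant. "
    "You are NOT talking to the user. Review the context below, decide if "
    "any action is needed, and propose the next time you should think again."
)

def build_inner_loop_prompt(recent_messages, reminders, facts):
    # Assemble the extended context the inner loop gets to "think" about.
    parts = [
        f"Current time: {datetime.now():%Y-%m-%d %H:%M:%S}",
        "Recent conversation:\n" + "\n".join(recent_messages),
        "Active reminders:\n" + "\n".join(reminders),
        "Known facts:\n" + "\n".join(facts),
    ]
    return "\n\n".join(parts)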

Combined with function calling (tool use) and the chaining of multiple messages, these newer models are now capable of producing very satisfying results.
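
For reference, a tool definition in OpenAI’s function-calling format looks like this (an add_reminder tool as an example; my real schemas have more tools and fields):

# One of the tools exposed to both the chat engine and the inner loop.
ADD_REMINDER_TOOL = {
    "type": "function",
    "function": {
        "name": "add_reminder",
        "description": "Schedule a reminder to be sent to the user.",
        "parameters": {
            "type": "object",
            "properties": {
                "text": {"type": "string", "description": "Reminder message"},
                "when": {"type": "string", "description": "ISO 8601 timestamp"},
            },
            "required": ["text", "when"],
        },
    },
}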

I’ve been using GPT-4 Turbo, mostly out of habit, but with today’s news that Anthropic’s Claude models also support function calling, I’m going to rework some of my code to try them out as well. One of the problems of running an inner loop is that API usage climbs very fast, so Claude’s lower costs will be very welcome for a hobbyist like me.
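
The rework should be mostly mechanical: Anthropic’s tool format is close to OpenAI’s, with the JSON schema sitting under input_schema and no nested function wrapper:

# The same example tool, in Anthropic's format.
ADD_REMINDER_TOOL_ANTHROPIC = {
    "name": "add_reminder",
    "description": "Schedule a reminder to be sent to the user.",
    "input_schema": {
        "type": "object",
        "properties": {
            "text": {"type": "string", "description": "Reminder message"},
            "when": {"type": "string", "description": "ISO 8601 timestamp"},
        },
        "required": ["text", "when"],
    },
}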

Here’s a sample of what an inner loop response looks like:

Based on the provided information:
- You have an event at 19:00 today for "Tennis".
- You were reminded to fetch and send the daily agenda at 7:00 and 8:00 this morning. At 8:00 I informed you that your agenda had already been shared.
- At 8:40 you provided your main tasks for today - go to the supermarket, do laundry, cook lunch. This was added as a fact.
- At 9:00 you mentioned feeling frustrated trying to figure out a bug in your code. I offered to help or provide a distraction if needed.
- At 10:56 you said you had a feeling you might be forgetting something. I checked and there were no other reminders, just the tasks you had listed earlier.
- At 11:10 you asked me to remind you about doing the laundry around 14:00, which I set a reminder for.

I don't have any other outstanding tasks, events or information that needs my immediate attention based on what has been provided so far.

The next advisable time to trigger the thinking process would be:
[2024-04-05 13:30:00]

This gives enough buffer before the 14:00 laundry reminder to check if any other context is needed and prepare accordingly.

Because the inner loop has access to the same (or more) functions, it can add new things (like reminders) as it sees fit, or even fix wrong info stored by the chat engine.

The result of the inner loop is fed to itself again and again, until the AI decides there’s nothing else to think or do.
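
In skeleton form, the loop is something like this (run_model, execute_tool_call, and the DONE sentinel are stand-ins for the actual API plumbing, not my exact code):

def inner_loop(context):
    # Feed the model's own output back into itself until it decides to stop.
    messages = [
        {"role": "system", "content": INNER_LOOP_SYSTEM_PROMPT},
        {"role": "user", "content": context},
    ]
    while True:
        reply = run_model(messages, tools=TOOLS)  # call out to the LLM API
        messages.append(reply.as_message())
        for call in reply.tool_calls:
            # The loop can add reminders, fix stored info, etc.
            messages.append(execute_tool_call(call))
        if not reply.tool_calls and "DONE" in reply.text:
            break  # nothing left to think about or do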

As you can see, I let the thinking process set its own time for the next “thinking trigger”. But I also override it when the user initiates a conversation, moving it closer in time. Otherwise, it is set to trigger every 30 minutes.
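
Extracting and applying the proposed time takes only a few lines (again a simplified sketch; the 5-minute pull-in when the user shows up is illustrative):

import re
from datetime import datetime, timedelta

DEFAULT_INTERVAL = timedelta(minutes=30)

def next_trigger(output: str, now: datetime) -> datetime:
    # Use the model-proposed [YYYY-MM-DD HH:MM:SS] time if present and in
    # the future; otherwise fall back to the default 30-minute interval.
    match = re.search(r"\[(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})\]", output)
    if match:
        proposed = datetime.strptime(match.group(1), "%Y-%m-%d %H:%M:%S")
        if proposed > now:
            return proposed
    return now + DEFAULT_INTERVAL

def on_user_message(scheduled: datetime, now: datetime) -> datetime:
    # When the user starts a conversation, pull the next trigger closer.
    return min(scheduled, now + timedelta(minutes=5))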

Until now I’ve been storing reminders and tasks in dedicated database tables with their corresponding models in code. But from the beginning, this way of working felt like a kind of cheat. I don’t want dedicated tables for stuff; I want the assistant to manage that information itself, carry it around, store and retrieve it more naturally, and use it when appropriate. The inner loop finally seems to be the solution for that.
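
I haven’t settled on the final shape yet, but the direction is a single free-form memory that the model manages through tools, instead of dedicated tables. A rough sketch of what I have in mind (nothing here is final):

from datetime import datetime

MEMORY: list[dict] = []  # in reality this would live in the database

def remember(text: str, tags: list[str]) -> str:
    # Tool: store a free-form piece of information.
    MEMORY.append({"text": text, "tags": tags, "stored_at": datetime.now()})
    return "stored"

def recall(query: str) -> list[str]:
    # Tool: naive substring retrieval; a real version would use
    # embeddings or full-text search.
    return [m["text"] for m in MEMORY if query.lower() in m["text"].lower()]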

Hopefully, more news soon!*


* I’ll try to take less than a year before the next update

25 lessons from Rory Sutherland

Stolen from a LinkedIn post.

1. A flower is a weed with an advertising budget.

2. Consumer research is great at telling you what people did, it’s terrible at telling you what people will do.

3. To reach intelligent answers, you often need to ask really dumb questions.

4. Highly targeted advertising merely finds customers: less well-targeted advertising can actually create customers.

5. Trust grows at the speed of a coconut tree and falls at the speed of a coconut.

6. The opposite of a good idea can also be a good idea.

7. The best ideas do not emerge within disciplines, they emerge at the intersections between them.

8. Never call a behavior irrational until you really know what the person is trying to do.

9. Solving problems using rationality is like playing golf with only one club.

10. If we allow the world to be run by logical people, we will only discover logical things.

11. The trouble with market research is that people don’t think what they feel, they don’t say what they think, and they don’t do what they say.

12. What gets mismeasured gets mismanaged.

13. The human mind does not run on logic any more than a horse runs on petrol.

14. Not everything that makes sense works, and not everything that works makes sense.

15. Google understood that if you’re just a search engine and nothing else — people assume you’re a very, very good search engine.

16. It is much easier to be fired for being illogical than it is for being unimaginative. The fatal issue is that logic always gets you to exactly the same place as your competitors.

17. Evolution is like a brilliant uneducated craftsman: what it lacks in intellect it makes up for in experience.

18. Test counterintuitive things only because no one else will.

19. Once you free yourself from the need to arrive at a solution through logical steps, the world becomes messier, but the solution space becomes 20x bigger.

20. For a business to be truly customer-focused, it needs to ignore what people say. Instead it needs to concentrate on what people feel.

21. It’s not enough to be right, you have to be persuasively right.

22. Don’t design for efficiency. Design for laziness.

23. We need to move from a world that solves for averages to one that solves for individuals.

24. All big data comes from the same place: the past. Yet a single change in context can change human behavior significantly.

25. If you never do anything differently, you’ll reduce your chances of enjoying lucky accidents.

Why I Keep My Kids Away from AI (For Now)

As a tech enthusiast and a parent, I often find myself exploring the intersection of technology, parenting, and education. Artificial intelligence, particularly large language models like the now-famous ChatGPT, has been making headlines for its potential in various applications, including education. While I am an ardent supporter of AI, I believe we should approach new technology with caution, especially when it comes to our youngest children. At very young ages, both the “what” and the “how” have a significant impact on brain development, and we likely cannot fully grasp the positive or negative consequences of certain choices.

There’s no denying that AI has come a long way in recent years. We’ve seen the creation of incredible tools that can produce stories, animate children’s drawings, and much more. These advancements have even allowed for heartwarming collaborations, such as the father who co-authored an entire book with his child, both in text and illustration, using AI.

While these developments are indeed inspiring, I can’t help but wonder if we’re skipping crucial steps in our understanding of what we’re doing. By relying heavily on AI, are we inadvertently stunting our children’s development of certain skills or cognitive abilities?

The Amish have a unique approach to technology, where they carefully assess the impact of new tech on their community and way of life before adopting it on a larger scale. I believe we can learn from their mindset, especially when considering the introduction of AI into our children’s lives.

I thought about writing this post when, today, I discovered my 7-year-old daughter using my old laptop (which she’s allowed to use for work and research purposes). I was curious about what she was doing, because most of the time she asks me for help when she wants to use it. To my surprise, she had opened the word processor and started typing her own short story, with a big title, “The Magical Night”, at the top of the page. This moment made me realize the importance of nurturing creativity and self-expression in our children, without the influence of AI.

Though I continue to be a strong advocate for AI and its potential benefits, I choose to keep my kids away from it for now. I believe in giving them the opportunity to explore their own creativity and develop essential skills without the shortcuts that AI might provide. Every other day, I am amazed by the creativity my 4-year-old pours out of her mind through her drawings and collages. Introducing such an amazing and absorbing tool as AI might unintentionally limit that growth and exploration, and that is a risk I am not willing to take.

Fact or Myth: playing video games while sick is bad for your kids

A couple of days ago I found an interesting post on my country’s subreddit, where people shared the most common childhood myths they remember growing up with. These were things that our parents, grandparents, and uncles would say to us to discourage certain behaviors. It was a fun read because I was familiar with most of them. I don’t know how popular these are in other places, but they seemed quite common throughout Portugal.

A few of my favorites were:

  • don’t swim/take a bath after eating; it can stop your digestion and you can die (wait at least 3 hours!!) – also, no ice cream during that time. Addendum: if you eat anything (even a small grape), the timer restarts 😀
  • don’t shave after eating (same reasons)
  • don’t swallow gum, it will glue to your stomach and kill you (funny how so many things kill you)
  • eating oranges at night will make you sick (or, again, kill you)
  • catching cold or rain will make you sick
  • don’t drink water with melon
  • getting a university degree will set you up for life (ok, dark humor on this one)

Another one I dealt with quite a few times when I was younger: when I was sick, I was forbidden to watch TV or play video games, because I wouldn’t recover as well as if I just rested.

Now, I am pretty sure many of these “myths” have some truth in them, even though most are probably a bit too exaggerated just to help get the point across. But I was curious about the relationship between screen time and recovering from a cold.

As a kid, I loved staying at home. Getting sick meant not going to school and staying home to do whatever I wanted. I understand that, from my parents’ perspective, if I was allowed to enjoy myself and have fun while sick, I might end up getting “sick” more often 😉 So, how much truth is in that?

This issue recently popped up because my own daughter got sick and had a fever. While I was on the couch playing a video game with my older daughter, the younger, sick sibling was next to me, also watching the TV and wanting to play.

Fever is something that usually doesn’t scare me. More important than measuring a number (is it bad if it’s 38.5°C? What about 38.0? Or 39?) is the state the person is in. I’ve witnessed my own kids being perfectly fine and playful in the high 38s and, on other occasions, prostrate and weak with lower temperatures. Common sense and good observation should prevail here, imho, instead of a cold, numerical analysis.

So, if you’re feeling well (even though recovering from sickness), is screen time where you’re having fun a bad thing? Does it hurt or slow down recovery?

I decided I wanted to address this doubt and get to the bottom of it.

For my own research, I tried different search terms (and search engines) to land on different pages of opposing views. Because, as we all know (or should), it’s quite easy to find content that you want to find (the so-called confirmation bias).

I am making a distinction between playing video games and watching TV. I tend to prefer video games over TV because TV is a very passive activity that creates a more “numb” state. We do allow our kids to watch TV, but only a pre-selected set of cartoons or shows that we find positive/educational, and only for a limited amount of time (30-40 min max). Video games (the good ones, of course), on the other hand, are active: they stimulate the brain, develop skills, involve you in a story, and can even make you move! (Yes, I just got a Switch with some cool physical games.)

I tried to get information from a varied set of sources, from mom-and-pop blogs to medical journals. I also browsed quite a few discussion forums where people would ask this same question (“I am sick with X, can I play games?”) and others would reply based on their own experience. This was interesting because you get to see how much the answers vary from person to person, some with a straight ‘NO’ and others with the extreme opposite. I found this reply quite funny:

“I have an open stomach right now with dressing covering it and here I am playing Spec Ops: The Line
So yes, ALWAYS.”

Let me then distill all that I could find, the way I have summarized it in my head (feel free to rebut if you disagree, but back it up ;))

  • There’s nothing inherently wrong with having sick people (kids or not) playing active and positive video games
  • In many cases, these are even beneficial as they can help deal with pain or go through boring times at home (especially when you are stuck for several days!)
  • Games provide a way to keep an engaged, active brain, learning and developing skills. Softer or calmer games can provide an environment to relax and provide some psychological “comfort” during troublesome times.
  • The main “cons” of video games during sickness are not about the games themselves, but about the danger of losing touch with important physical needs (sleep, rest, hydration). The fault here is the lack of proper parenting and supervision, not the video games. This is where kids need help, because they can easily get absorbed in something they really like and lose track of reality. Make sure they are resting at appropriate times (depending on the type of sickness and age), drinking plenty, and eating healthily. In other words, as a parent, pay attention. Video games are not a nanny 🙂
  • If you trust your child, let them also guide you on what is OK or not. A truly sick child with a bad headache will hardly want to be staring at a TV.

To finish off, here is another comment from a forum. I think it sums things up quite well and brings us back to my initial observation on how to evaluate how bad a fever really is:

“Actually it is like a sickness level indicator for me, “I feel so bad that I cannot even play video games” or “I only need to rest a bit, might as well play video games while resting.”

Bonus: can sitting too close to the TV damage your eyes? “The TV myth may have started in the 1960s, and at that time, it may have been true. Some early color TV sets emitted high amounts of radiation that could have caused eye damage, but this problem has long been remedied, and today’s TV and computer monitors are relatively safe”. The issue is how many hours you watch, and what other activities your eyes get, not how close or how far you are from the screen.


October in Review

October was a “disaster” if I rated it by the standards I set in September. But I’m giving myself some slack here, since a lot has been going on.

I started a new job at a different company, and that, by itself, was a big thing. Well, half because of the new job and half because of leaving the old job. Turns out I get attached to people really easily and it’s hard for me to let go. I guess people in my field are used to switching regularly, but for me, being relatively new to this “job” thing, changing teams felt a bit awkward.

September in Review

What I read this month

Four Thousand Weeks made it to the top of my all-time favorite books. I’ll need to write it its own post.


AI that expands its features by writing its own source code

When I finished reading David Shapiro’s “Natural Language Cognitive Architecture”, I got very excited about GPT-3 because I finally grasped how powerful an LLM (large language model) can be. David’s contributions to the OpenAI community and his YouTube videos helped fuel my interest and desire to experiment with things for myself as well.

After many months of playing with the OpenAI playground, I decided it was time to start building something more serious. I identified 3 goals that would be worth exploring and developing:

  1. an AI agent with proper memory, layers of detail, summarization, etc.
  2. the capability to fetch new and updated knowledge from the outside world (using the internet)
  3. the possibility for the AI to expand its features by writing its own source code

I decided to go with number 3 because it sounded like the most challenging, most interesting, and perhaps least explored option.

My productivity system in 2022

This past year, I’ve made significant changes to my whole “productivity system”. These changes were driven by two major life changes.

One, I abandoned my work life as a freelancer, switching to a full-time position. This greatly reduced my need to track many independent projects and endeavors to explore work opportunities (which I’m very grateful for).
Two, I also decided to abandon my work as a GTD trainer, in order to fully focus on developing my skills as a developer.

This doesn’t mean I have stopped doing GTD. Those who are proficient in it will recognize the core aspects of GTD in my new system, even if at first glance it may not seem like it.

Getting back to writing more

A few days ago I was revisiting my old blog and found it funny that one of my last posts was about going more offline, less dependent on cloud services, and having more time for important things.

It is funny because, 3 years later, I feel I’m reliving the same thing. Well, perhaps not exactly the same, but I’m at that point in the cycle where I’m fed up with too many internet/digital distractions. I’ve been missing writing and making plans to restart posting on my blog more frequently, but that hadn’t happened so far… until today.

A few months ago, when I committed (to myself) to reading two books a month, I uninstalled all the time-wasters from my phone. And, mind you, I didn’t have many. I hate mobile games and useless mobile apps. But I was still spending too much time communicating with people (Reddit, Discord, Twitter). So I removed all of that and began taking my faithful Kobo e-reader with me everywhere. Fast forward a few months and, yes, I have mostly been able to read two books every month, which has been a great experience that gives me a lot of satisfaction.

Now I’m moving to the next step: besides reading, I also want to get back to writing more. For weeks I’ve been drafting posts in my mind about stuff I want to write, and yet I never seem to find the time to sit down and start. Work, family, hobbies, reading… everything gets in the way… but does it? Until I removed all distractions from my phone, I was also complaining that I didn’t have time to read two books a month.

So today I made the decision to let go of a bunch of Discord communities I’ve been participating in. It was a tough decision because I also get value from them: I learn, share, and help others… but my analysis of the signal-to-noise ratio tells me this will be a good decision in the long haul.

I have no blog readers. I have no audience. I have nothing to sell. I want to write just because I like to do it. Not that I’m a great writer, or that I aspire to become one and write books. I just like to write on my blog and note down my thoughts about things. Sometimes I like to come back and recall past findings and thoughts. Maybe I’m getting old, but I just enjoy quiet time to learn, observe, and reflect. Both reading and writing give me that.

So, today is the day I start and this is the first post of this new phase.

I re-arranged this blog to match this phase: more focus on the blog, less on my work and projects. Keeping things simple, that’s my way 🙂

Advent of Code Day 2

Reminder: my challenge is to attempt to solve the problem without writing a single line of code. Ideally, just writing an initial prompt and getting the whole code in one go. You can see the full source code (and the respective prompts) in my repository at https://github.com/nunodonato/advent-of-code-2021

Part 1

Today I decided to try to push Copilot further. Instead of writing my own prompt with step-by-step instructions, I basically pasted the challenge instructions directly. The only change needed was the first part, where I instruct it how to get the data from the input file.

Surprisingly, Copilot generated the right solution without any further changes!

Part 2

Part 2 was more challenging; I’m not sure if that’s because it builds on part 1. I tried to do the same: copying and pasting the instructions from part 2 and adding an extra instruction to get the input from the file.

The results were still quite good, but there were a couple of flaws that ended up giving the wrong result.

My struggle today was to avoid these flaws, and it took me quite a number of tries to get it right.

The hardest one was getting Copilot to understand that ‘down’ should ADD to the depth and ‘up’ should SUBTRACT. I’m guessing this is due to the model’s understanding that down usually means a subtraction and up an addition. I tried rewording it, but it kept insisting on those operations.

The other flaw was that it frequently ignored the “aim” variable, even though there was a clear instruction to consider it.

After much tinkering and experimentation, I ended up deleting some lines from the original prompt and slightly changing the wording of others. After some trial and error, I finally got a working solution (not the first proposed one, though).

The final prompt became:

It seems like the submarine can take a series of commands like 'forward 1', 'down 2', or 'up 3':

In addition to horizontal position and depth, you'll also need to track a third value, aim, which also starts at 0.

The commands also mean something entirely different than you first thought:
down X increases your aim by X units. 
up X decreases your aim by X units. 
forward X does two things: 
  1. It increases your horizontal position by X units. 
  2. It increases your depth by your aim multiplied by X.

Using this new interpretation of the commands, calculate the horizontal position and depth you would have after following the planned course. What do you get if you multiply your final horizontal position by your final depth?
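
For comparison, the logic this prompt describes boils down to a handful of lines. This is my own minimal version of it, not the Copilot output (the generated code is in the repository):

horizontal = depth = aim = 0

with open("input.txt") as f:
    for line in f:
        command, amount = line.split()
        x = int(amount)
        if command == "down":
            aim += x          # down increases aim, not depth
        elif command == "up":
            aim -= x
        elif command == "forward":
            horizontal += x
            depth += aim * x  # depth only changes while moving forward

print(horizontal * depth)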