AI minds and the ‘inner loop’

Back in late 2022, I started working on my pet project codenamed ‘Armin’ (for ‘artificial mind’). It was all about creating an AI agent that would have some real mind-like faculties. Mind you, this was pre-ChatGPT, so I was using the good old completion models, i.e. text-davinci-003. I did a series of videos on the development of Armin, until eventually my motivation dwindled: a lot of things were changing in the LLM scene, and I also felt kinda lost on how to proceed.

Although I’ve mostly been on the sidelines for the past year and a half, I’ve kept a close eye on everything getting released: new models, new capabilities.

Last month I finally decided to pick up a project that branched off from Armin: an AI personal digital assistant that lives as a Telegram bot (because that’s where I feel an AI assistant is most useful and easily reachable from anywhere).

With all the new stuff that has been coming out, I quickly managed to get it to a good level of quality using GPT-4 Turbo, Whisper (so that I can “speak” to it via Telegram voice messages), and GPT-4 Vision (so that I can send it photos).

With all that boilerplate out of the way, I finally began dipping my toes into the crucial part that motivated me to start Armin in the first place: the inner loop.

The Inner Loop

Nowadays there are thousands of people using ChatGPT (or similar models) to build chatbots and all kinds of AI assistants. They all suffer from the same problem: they feel too much like an artificial chatbot. They reply when you ask a question, and that’s it. Sure, they can take a complex question or request, browse for you, call APIs, etc. But in the end, it’s always call-and-response.

For me, a truly “alive” assistant has to be much more than that. It needs to be proactive. It needs to be aware of everything happening in my life (that I decide to share), reflect on previous conversations, and be able to put all of this together to come up with new ideas and take the initiative to act on its own or to ask/tell me something. All of this has to happen without the user asking for it.

To achieve this, I began implementing the “inner loop”.

The inner loop has a modified system prompt that instructs the model on what it is supposed to do and how it differs from the “chat” part of the engine. I feed it a larger prompt with more context than usual and then let the AI “think” about it.
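In rough strokes, a single inner loop step looks something like the sketch below (using the OpenAI Python SDK; the system prompt here is a simplified stand-in for the real one):

```python
# A simplified sketch of one inner loop step (OpenAI Python SDK >= 1.0).
# The system prompt below is an illustrative stand-in, not the real one.
from openai import OpenAI

client = OpenAI()

INNER_LOOP_SYSTEM_PROMPT = """\
You are the background "thinking" process of a personal assistant.
Unlike the chat engine, nobody is waiting for your reply. Review the
context below, decide if anything needs action (reminders, messages
to the user, corrections), and state when you should think next.
"""

def inner_loop_step(context: str) -> str:
    """Feed a large context blob to the model and let it 'think'."""
    response = client.chat.completions.create(
        model="gpt-4-turbo",
        messages=[
            {"role": "system", "content": INNER_LOOP_SYSTEM_PROMPT},
            {"role": "user", "content": context},  # agenda, facts, recent chats...
        ],
    )
    return response.choices[0].message.content
```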

Together with function calling (i.e. tool use) and the chaining of multiple messages, these newer models are now capable of producing very satisfying results.
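To make the mechanics concrete, here’s a sketch of that round trip: tools are declared as JSON schemas, and whenever the model asks to call one, the result is appended to the conversation and the model is called again. The set_reminder tool and the dispatch() handler are illustrative placeholders, not my actual tool set:

```python
# Sketch of the function-calling round trip (reuses the `client` from
# the previous snippet). set_reminder and dispatch() are placeholders.
import json

TOOLS = [{
    "type": "function",
    "function": {
        "name": "set_reminder",
        "description": "Schedule a reminder for the user.",
        "parameters": {
            "type": "object",
            "properties": {
                "time": {"type": "string", "description": "ISO 8601 timestamp"},
                "text": {"type": "string"},
            },
            "required": ["time", "text"],
        },
    },
}]

def dispatch(name: str, args: dict) -> dict:
    """Placeholder: route tool calls to real handlers (DB, Telegram, ...)."""
    if name == "set_reminder":
        return {"status": "scheduled", **args}
    return {"error": f"unknown tool: {name}"}

def run_with_tools(messages: list) -> str:
    """Chain model calls, executing requested tools, until it just answers."""
    while True:
        response = client.chat.completions.create(
            model="gpt-4-turbo", messages=messages, tools=TOOLS
        )
        msg = response.choices[0].message
        if not msg.tool_calls:
            return msg.content  # no more tool requests, chain is done
        messages.append(msg)
        for call in msg.tool_calls:
            result = dispatch(call.function.name, json.loads(call.function.arguments))
            messages.append({
                "role": "tool",
                "tool_call_id": call.id,
                "content": json.dumps(result),
            })
```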

I’ve been using GPT-4 Turbo, mostly out of habit, but with today’s news that Anthropic’s Claude models also support function calling, I’m going to rework some of my code to try them out as well. One of the problems of running an inner loop is that it makes your API usage climb very fast, so the lower cost of Claude will be very welcome for a hobbyist like me.
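For anyone porting in the same direction, the Anthropic version differs mostly in shape: tools are flat objects with an input_schema instead of OpenAI’s nested function/parameters layout, and tool requests come back as tool_use content blocks. A rough sketch with the same placeholder tool (the API is brand new as of this writing, so details may still shift):

```python
# Rough sketch of the same tool on Anthropic's Messages API, reusing the
# dispatch() placeholder from the previous snippet.
import anthropic

claude = anthropic.Anthropic()

response = claude.messages.create(
    model="claude-3-haiku-20240307",  # the cheapest of the Claude 3 family
    max_tokens=1024,
    tools=[{
        "name": "set_reminder",
        "description": "Schedule a reminder for the user.",
        "input_schema": {
            "type": "object",
            "properties": {
                "time": {"type": "string", "description": "ISO 8601 timestamp"},
                "text": {"type": "string"},
            },
            "required": ["time", "text"],
        },
    }],
    messages=[{"role": "user", "content": "Remind me about laundry at 14:00."}],
)

# Tool requests arrive as content blocks with type == "tool_use".
for block in response.content:
    if block.type == "tool_use":
        print(dispatch(block.name, block.input))
```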

Here’s a sample of what an inner loop response looks like:

Based on the provided information:
- You have an event at 19:00 today for "Tennis".
- You were reminded to fetch and send the daily agenda at 7:00 and 8:00 this morning. At 8:00 I informed you that your agenda had already been shared.
- At 8:40 you provided your main tasks for today - go to the supermarket, do laundry, cook lunch. This was added as a fact.
- At 9:00 you mentioned feeling frustrated trying to figure out a bug in your code. I offered to help or provide a distraction if needed.
- At 10:56 you said you had a feeling you might be forgetting something. I checked and there were no other reminders, just the tasks you had listed earlier.
- At 11:10 you asked me to remind you about doing the laundry around 14:00, which I set a reminder for.

I don't have any other outstanding tasks, events or information that needs my immediate attention based on what has been provided so far.

The next advisable time to trigger the thinking process would be:
[2024-04-05 13:30:00]

This gives enough buffer before the 14:00 laundry reminder to check if any other context is needed and prepare accordingly.

Because the inner loop has access to the same (or more) functions, it can add new things (like reminders) as it sees fit, or even fix wrong info stored by the chat engine.

The result of the inner loop is fed back into itself again and again, until the AI decides there’s nothing else to think about or do.
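That outer loop can be as simple as re-feeding the previous thought and stopping on an agreed marker. Something along these lines, reusing the helpers from the earlier sketches (the stop marker and step cap are just one possible convention):

```python
# One possible shape for the self-feeding loop: the previous thought is
# appended to the context and the model runs again until it signals it
# is done. NOTHING_ELSE_TO_DO is an assumed convention, not my actual one.
def think(context: str, max_steps: int = 5) -> str:
    thought = ""
    for _ in range(max_steps):  # hard cap so it can't loop (and bill) forever
        messages = [
            {"role": "system", "content": INNER_LOOP_SYSTEM_PROMPT},
            {"role": "user", "content": f"{context}\n\nPrevious thought:\n{thought}"},
        ]
        thought = run_with_tools(messages)
        if "NOTHING_ELSE_TO_DO" in thought:  # the model says it's finished
            break
    return thought
```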

As you can see, I let the thinking process set its own time for the next “thinking trigger”. But I also override it when the user initiates a conversation, moving it closer in time. Otherwise, it is set to trigger every 30 minutes.
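Extracting the self-scheduled trigger from the response is straightforward, since the bracketed timestamp format matches the sample above; the 30-minute fallback mirrors the default cadence, while the 5-minute pull-in on user activity is an arbitrary illustrative value:

```python
# Sketch of extracting the next trigger time from a thought. The
# [YYYY-MM-DD HH:MM:SS] format matches the sample response above; the
# 5-minute pull-in on user activity is an arbitrary illustrative value.
import re
from datetime import datetime, timedelta

TRIGGER_RE = re.compile(r"\[(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})\]")

def next_trigger(thought: str, now: datetime) -> datetime:
    match = TRIGGER_RE.search(thought)
    if match:
        return datetime.strptime(match.group(1), "%Y-%m-%d %H:%M:%S")
    return now + timedelta(minutes=30)  # default cadence

def on_user_message(scheduled: datetime, now: datetime) -> datetime:
    """A user-initiated conversation pulls the next thought closer."""
    return min(scheduled, now + timedelta(minutes=5))
```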

Until now, I’ve been storing reminders and tasks in dedicated database tables with their corresponding models in code. But since the beginning, this way of working has felt like a kind of cheat. I don’t want dedicated tables for stuff; I want the AI to be able to manage that information itself, carry it around, store it and retrieve it more naturally, and use it when appropriate. The inner loop finally seems to be the solution for that.
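For illustration, one possible shape for this is a single free-text memory store that the model reads and writes through a couple of generic tools, a sketch of the direction rather than a finished design:

```python
# One possible shape for "no dedicated tables": a single free-text memory
# exposed to the model through two generic tools. This illustrates the
# direction, not a finished design.
import sqlite3

db = sqlite3.connect("assistant.db")
db.execute("CREATE TABLE IF NOT EXISTS memory (note TEXT, created TEXT)")

def remember(note: str) -> dict:
    """Tool: store any fact, task or reminder as plain text."""
    db.execute("INSERT INTO memory VALUES (?, datetime('now'))", (note,))
    db.commit()
    return {"status": "stored"}

def recall(query: str) -> list[str]:
    """Tool: fetch notes matching a query. Naive substring search;
    embeddings would be the obvious upgrade."""
    rows = db.execute(
        "SELECT note FROM memory WHERE note LIKE ?", (f"%{query}%",)
    ).fetchall()
    return [row[0] for row in rows]
```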

Hopefully, more news soon!*


* I’ll try to take less than a year before the next update
