Why I Keep My Kids Away from AI (For Now)

As a tech enthusiast and a parent, I often find myself exploring the intersection of technology, parenting, and education. Artificial intelligence, particularly large language models like the now-famous ChatGPT, has been making headlines for its potential in various applications, including education. While I am an ardent supporter of AI, I believe that we should approach new technology with caution, especially when it comes to our youngest children. At very young ages, both the “what” and the “how” have a significant impact on brain development, and it’s likely that we cannot fully grasp the potential positive or negative consequences of certain choices.

There’s no denying that AI has come a long way in recent years. We’ve seen the creation of incredible tools that can produce stories, animate children’s drawings, and much more. These advancements have even allowed for heartwarming collaborations, such as the father who co-authored an entire book with his child, both in text and illustration, using AI.

While these developments are indeed inspiring, I can’t help but wonder if we’re skipping crucial steps in our understanding of what we’re doing. By relying heavily on AI, are we inadvertently stunting our children’s development of certain skills or cognitive abilities?

The Amish have a unique approach to technology, where they carefully assess the impact of new tech on their community and way of life before adopting it on a larger scale. I believe we can learn from their mindset, especially when considering the introduction of AI into our children’s lives.

I thought about writing this post when, today, I discovered my 7-year-old daughter using my old laptop (which she’s allowed to use for work and research purposes). I was curious about what she was doing, because most of the time she asks me for help when she wants to use it. To my surprise, she had opened the word processor and started typing her own short story. A big title, “The Magical Night”, sat at the top of the page. This moment made me realize the importance of nurturing creativity and self-expression in our children, without the influence of AI.

Though I continue to be a strong advocate for AI and its potential benefits, I choose to keep my kids away from AI for now. I believe in giving them the opportunity to explore their own creativity and develop essential skills without the shortcuts that AI might provide. Every other day, I am amazed by the creativity my 4-year-old can pour out of her mind through her drawings and collages. Introducing such an amazing and absorbing tool as AI might unintentionally limit that growth and exploration, a risk I am not willing to take.

Fact or Myth: playing video games while sick is bad for your kids

A couple of days ago I found an interesting post on my country’s subreddit, where people shared the most common childhood myths they remember growing up with. These were things that our parents, grandparents, and uncles would say to us to discourage certain behaviors. It was a fun read because I was familiar with most of them. I don’t know how popular these are in other places, but they seemed quite common throughout Portugal.

A few of my favorites were:

  • don’t swim/take a bath after eating, it can stop your digestion and you can die (wait at least 3 hours!!) – also, no ice cream during that time. Addendum: if you eat anything (even a small grape) the time counter restarts 😀
  • don’t shave after eating (same reasons)
  • don’t swallow gum, it will glue to your stomach and kill you (funny how so many things kill you)
  • Eating oranges at night will make you sick (or, again, kill you)
  • Catching cold or rain will make you sick
  • Don’t drink water with melon
  • Getting a university degree will set you up for life (ok, dark humor on this one)

Another one I dealt with quite a few times when I was younger: whenever I was sick, I was forbidden to watch TV or play video games, because I supposedly wouldn’t recover as well as if I just rested.

Now, I am pretty sure many of these “myths” have some truth in them, even though most are probably a bit too exaggerated just to help get the point across. But I was curious about the relationship between screen time and recovering from a cold.

As a kid, I loved staying at home. Getting sick meant not going to school and staying at home to do whatever I wanted to do. I understand that, from my parents’ perspective, if I was allowed to enjoy and have fun while sick, I might end up getting “sick” more often 😉 So, how much truth is in that?

This issue recently popped up because of my own daughter getting sick and having a fever. While I was on the couch playing a video game with my older daughter, the younger, sick sibling was next to me, also watching the TV and wanting to play.

Fever is something that usually doesn’t scare me. More important than measuring a number (is it bad if it’s 38.5°C? What about 38.0? Or 39?) is the state the person is in. I’ve witnessed my own kids being perfectly fine and playful in the high 38s and, on other occasions, more prostrated and weak at lower temperatures. Common sense and good observation should prevail here, imho, instead of a cold, numerical analysis.

So, if you consider that you are feeling well (even though recovering from sickness), is screen time you enjoy a bad thing? Does it hurt or slow down recovery?

I decided I wanted to address this doubt and get to the bottom of it.

For my own research, I tried different search terms (and search engines) to land on different pages of opposing views. Because, as we all know (or should), it’s quite easy to find content that you want to find (the so-called confirmation bias).

I am making a distinction between playing video games and watching TV. I tend to prefer video games to TV because TV is a very passive activity that creates a more “numb” state. We do allow our kids to watch TV, but only a pre-selected set of cartoons or shows that we find positive/educational, and only for a limited amount of time (30–40min max). On the other hand, video games (the good ones, of course) are active: they stimulate the brain, develop skills, involve you in a story, and can even make you move! (Yes, I just got a Switch with some cool physical games.)

I tried to get information from a varied set of sources, from mom-and-pop blogs to medical journals. I also browsed quite a few discussion forums where people would ask this same question (I am sick with X, can I play games?) and others would reply based on their own experience. This was interesting because you get to see how much the answers vary from person to person. Some gave a straight ‘NO’, others the extreme opposite. I found this reply quite funny:

“I have an open stomach right now with dressing covering it and here I am playing Spec Ops: The Line
So yes, ALWAYS.”

Let me then distill all that I could find, the way I have summarized it in my head (feel free to rebut if you disagree, but back it up ;))

  • There’s nothing inherently wrong with letting sick people (kids or not) play active and positive video games
  • In many cases, games are even beneficial, as they can help deal with pain or get through boring stretches at home (especially when you are stuck for several days!)
  • Games provide a way to keep an engaged, active brain, learning and developing skills. Softer or calmer games can provide an environment to relax and provide some psychological “comfort” during troublesome times.
  • The main “cons” of video games during sickness are not about the games themselves, but the dangers of losing touch with important physical needs (sleep, rest, hydration). The fault here is the lack of proper parenting and supervision, not the video games. This is where kids need help because they can easily get absorbed in something they really like and lose track of reality. Make sure they are resting at appropriate times (depending on the type of sickness and age), drinking a lot, and eating healthy. In other words, as a parent, pay attention. Video games are not a nanny 🙂
  • If you trust your child, let them also guide you on what is OK or not. A truly sick child with a bad headache will hardly want to stare at a TV.

To finish off here is another comment from a forum. I think it sums it up quite well and brings us back to my initial observation on how to evaluate how bad a fever really is:

“Actually it is like a sickness level indicator for me, “I feel so bad that I cannot even play video games” or “I only need to rest a bit, might as well play video games while resting.”

Bonus: Can sitting too close to the TV damage your eyes? “The TV myth may have started in the 1960s, and at that time, it may have been true. Some early color TV sets emitted high amounts of radiation that could have caused eye damage, but this problem has long been remedied, and today’s TV and computer monitors are relatively safe”. The issue is how many hours you watch it, and what other activities your eyes do, not how close or how far you are from the screen.


October in Review

October was a “disaster” if I rated it by the standards I had set in place in September. But I’m giving myself some slack here, since lots of things have been going on.

I started a new job at a different company, and that, by itself, was a big thing. Well, half because of the new job and half because of leaving the old job. Turns out I get attached to people really easily and it’s hard for me to let go. I guess people in my field are used to switching regularly, but for me, being relatively new to this “job” thing, changing teams felt a bit awkward.

September in Review

What I read this month

Four Thousand Weeks made it to the top of my all-time favorite books. I’ll need to write it its own post.

Random Interesting Links

AI that expands its features by writing its own source code

When I finished reading David Shapiro’s “Natural Language Cognitive Architecture”, I got very excited about GPT-3 because I finally grasped how powerful an LLM (large language model) can be. David’s contributions to the OpenAI community and his YouTube videos have helped fuel my interest and my desire to experiment with things myself.

After many months of playing with the OpenAI playground, I decided it was time to start building something more serious. I identified 3 goals that would be worth exploring and developing:

  1. an AI agent with proper memory, layers of detail, summarization, etc
  2. the capability of fetching new and updated knowledge from the outside world (using the internet)
  3. give AI the possibility of expanding its features by writing its own source code

I decided to go with number 3 because it sounded like the most challenging, most interesting, and perhaps least explored option.

My productivity system in 2022

This past year, I’ve made significant changes to my whole “productivity system”. These changes were driven by two major life changes.

One, I abandoned my work life as a freelancer, switching to a full-time position. This greatly reduced my need to track many independent projects and endeavors to explore work opportunities (which I’m very grateful for).
Two, I also decided to abandon my work as a GTD trainer, in order to fully focus on developing my skills as a developer.

This doesn’t mean I have stopped doing GTD. Those who are proficient in it will recognize the core aspects of GTD in my new system, even if, at a glance, it may not seem so.

Getting back to writing more

A few days ago I was revisiting my old blog and found it funny that one of my last posts was about going more offline, less dependent on cloud services, and having more time for important things.

It is funny because, 3 years later, I feel I’m reliving the same thing again. Well, perhaps not exactly the same, but I’m at that point of the cycle where I’m fed up with too many internet/digital distractions. I’ve been missing writing and have been making plans to restart posting on my blog on a more frequent basis, but this hadn’t happened so far… until today.

A few months ago, when I committed (to myself) to read two books a month, I uninstalled all the time wasters from my phone. And, mind you, I didn’t have many. I hate mobile games and useless mobile apps. But I was still spending too much time communicating with people (Reddit, Discord, Twitter). So I removed all of that and began taking my faithful Kobo e-reader with me everywhere. Fast forward a few months and, yes, I have mostly been able to read two books every month, which has been a great experience that gives me a lot of satisfaction.

Now I’m moving to the next step: besides reading, I also want to get back to writing more. For weeks I’ve been drafting posts in my mind about stuff I want to write, and yet I never seem to find the time to sit and start. Work, family, hobbies, reading… everything gets in the way… but does it? Until I removed all distractions from my phone, I was also complaining that I didn’t have time to read two books a month.

So today I made the decision to let go of a bunch of Discord communities I’ve been participating in. It was a tough decision because I also get value from them; I learn, share, and help others… but my analysis of the signal vs. noise ratio tells me this will be a good decision in the long haul.

I have no blog readers. I have no audience. I have nothing to sell. I want to write just because I like to do it. Not that I’m a great writer, or that I aspire to become one and write books. I just like to write on my blog and note down my thoughts about things. Sometimes I like to come back to it and recall past findings and thoughts. Maybe I’m getting old, but I just enjoy quiet time to learn, observe, and reflect. Both reading and writing give me that.

So, today is the day I start and this is the first post of this new phase.

I re-arranged this blog to match this phase: more focus on the blog, less on my work and projects. Keeping things simple, that’s my way 🙂

Advent of Code Day 2

Reminder: my challenge is to attempt to solve the problem without writing a single line of code. Ideally, just writing an initial prompt and getting the whole code in one go. You can see the full source code (and the respective prompts) in my repository at https://github.com/nunodonato/advent-of-code-2021

Part 1

Today I decided to try and push Copilot further. Instead of writing my own prompt with step-by-step instructions, I basically pasted the challenge instructions directly. The only change needed was the first part, where I instruct it how to get the data from the input file.

Surprisingly, Copilot generated the right solution without any further changes!

Part 2

Part 2 was more challenging, perhaps because it relies on part 1. I tried to do the same: copying and pasting the instructions from part 2 and adding an extra instruction to get the input from the file.

The results were still quite good, but there were a couple of flaws that ended up giving the wrong result.

My struggle today was trying to avoid these flaws, and it took me quite a number of tries to get it right.

The hardest one was getting Copilot to understand that ‘down’ would ADD to the depth and ‘up’ would SUBTRACT. I’m guessing this was due to the model’s understanding that down usually means a subtraction and up an addition. I tried rewording it, but it kept insisting on those operations.

The other flaw was that it frequently ignored the “aim” variable, even though there was a clear instruction to consider it.

After much tinkering and experimentation, I ended up deleting some lines from the original prompt and slightly changing the words on others. After some trial and error I finally got a working solution (not the 1st proposed one, though).

The final prompt became:

It seems like the submarine can take a series of commands like 'forward 1', 'down 2', or 'up 3'. In addition to horizontal position and depth, you'll also need to track a third value, aim, which also starts at 0.

The commands also mean something entirely different than you first thought:
down X increases your aim by X units. 
up X decreases your aim by X units. 
forward X does two things: 
  1. It increases your horizontal position by X units. 
  2. It increases your depth by your aim multiplied by X.

Using this new interpretation of the commands, calculate the horizontal position and depth you would have after following the planned course. What do you get if you multiply your final horizontal position by your final depth?
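For reference, the logic this prompt asks for fits in a few lines. Here is my own quick sketch of it (in Python, though my actual solutions are PHP), using the example course from the puzzle statement:

```python
def run_course(commands):
    """Apply the part-2 rules: down/up adjust aim, forward moves and dives."""
    horizontal = depth = aim = 0
    for command in commands:
        direction, amount = command.split()
        x = int(amount)
        if direction == "down":
            aim += x
        elif direction == "up":
            aim -= x
        elif direction == "forward":
            horizontal += x
            depth += aim * x  # depth changes by aim times the distance moved
    return horizontal * depth

# Example course from the puzzle statement: ends at position 15, depth 60
course = ["forward 5", "down 5", "forward 8", "up 3", "down 8", "forward 2"]
print(run_course(course))  # 900
```

Seeing it spelled out like this makes it obvious why the model’s prior (down = subtract) fights the puzzle’s definition: here ‘down’ genuinely adds.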

Advent of Code Day 1

Part 1

The challenge begins. The first puzzle was quite easy to solve, but posed some challenges in writing an efficient prompt.

Reminder: my challenge is to attempt to solve the problem without writing a single line of code. Ideally, just writing an initial prompt and getting the whole code in one go.
You can see the full source code (and the respective prompts) in my repository at https://github.com/nunodonato/advent-of-code-2021

Although the problem was fairly simple, Copilot struggled with a very specific part of it.

First, I was impressed by how easy it was to get it to read from a file whose name I didn’t specify directly. I wanted each day to have its own input in an “inputs” folder, with matching filenames. This was easy to accomplish, and the solution presented was spot on!

$input = file_get_contents(__DIR__ . '/inputs/' . basename(__FILE__, '.php') . '.txt');

The next part involved a lot of trial and error with the right prompt phrasing. Interestingly, the problems with the generated code were always either:

  • looping the array in such a way that it would go out of bounds (trying to read the next position when already at the last, for example) – although this was OK, as PHP doesn’t crash, I didn’t accept the solution
  • not incrementing the counter for every find, but instead adding the value of the line to the counter

I had to tinker with this particular line of the prompt until I managed to get it right. And even that was not perfect, as the first solution proposed was incorrect. The second, however, was flawless:

for ($i = 0; $i < count($input) - 1; $i++) {
    if ($input[$i] < $input[$i + 1]) $counter++;
}

Codex vs Copilot

After I got the problem solved and the solution accepted, I copy-pasted my prompt into OpenAI’s playground using the davinci-codex model.

The solution presented was quite different from what I got, and it was wrong.

I tried a few more, while changing the temperature value, and got a very similar solution (but still not correct) at 0.7.

It’s clear that Copilot has some advantages over using Codex, as it can present several solutions at once (and it’s free). I do wonder what parameters it uses! Hopefully, I’ll get more insights into it during the challenge.

Part 2

Part two brought a more complex problem. My first attempt was over-complicated, and Copilot was throwing out a bunch of useless code.

I re-framed the problem in my mind and decided to approach it in a different way. (important reminder: AI will struggle to solve complex coding problems if you don’t know the steps to solve it yourself)

After simplifying and attacking it from a different angle, I was very close to a solution. Copilot was struggling again with the iteration aspect of it, almost always going out of bounds. I tried different wording but they either made it worse or seemed to be ignored completely.

Finally I tried to write something more verbose and explicit:

// iterate the array until reaching the last-3 element
// store in counter A the sum of three values from the current position
// store in counter B the sum of three values from the next position
// if counter B is greater than counter A, increment the total by one

That gave me a clean and simple solution which I accepted and passed the second part of today’s challenge.
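The four comment lines above translate almost mechanically into code. Here is my sketch of the resulting logic (in Python rather than the PHP in my repo), checked against the example from the puzzle statement:

```python
def count_window_increases(depths):
    """Count how often the next three-value window sums larger than the current one."""
    total = 0
    # iterate the array until reaching the last-3 element
    for i in range(len(depths) - 3):
        counter_a = sum(depths[i:i + 3])      # sum of three values from the current position
        counter_b = sum(depths[i + 1:i + 4])  # sum of three values from the next position
        if counter_b > counter_a:
            total += 1
    return total

# Example measurements from the puzzle statement: 5 windows increase
print(count_window_increases([199, 200, 208, 210, 200, 207, 240, 269, 260, 263]))  # 5
```

Note how the loop bound is exactly what the first comment says: stopping three elements before the end is what kept Copilot from going out of bounds.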

Codex vs Copilot

Using a temperature of 0.7 and the exact same prompt, Codex gave me exactly the same solution as Copilot did.

Using a telegram bot to help me manage a solar off-grid setup

As some of my readers might know, I live off-the-grid, using solar power for electricity. My setup is quite small: a 9-panel, ~2800W array with 9.9kWh of battery capacity.

However, we do have a lot of electrical needs, since we cook with an induction cook-top and have a dishwasher and an electric water heater. Back in 2019 I wrote Hacking my way through off-grid survival, in which, amongst other things, I described how I used a Raspberry Pi mini computer to poll my inverter and upload useful data to a public server. That way I could easily monitor my system. (Yes, modern inverters already include cloud solutions.) But that is not all: I then coded a small program to add some rudimentary “smartness” to my setup. Using a WiFi-enabled smart power plug, I could automate turning the water tank on or off based on solar production, battery capacity, and time of day. This still required a lot of time and energy on my part, because I needed to keep a frequent eye on things (except on sunny days).
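To give an idea of the kind of rule that little program encoded, here is a minimal sketch of the water-heater decision. The function name and the thresholds are illustrative assumptions for this post, not my actual values:

```python
def should_heat_water(solar_watts, battery_pct, hour):
    """Illustrative rule: heat only during daylight, with strong production
    and a comfortable battery reserve (thresholds made up for the example)."""
    daytime = 10 <= hour <= 16
    return daytime and solar_watts > 1500 and battery_pct > 70

print(should_heat_water(2000, 85, 12))  # True: sunny midday, healthy battery
print(should_heat_water(2000, 55, 12))  # False: battery reserve too low
```

The problem with a static rule like this is that it only looks at the present moment, which is exactly why I still had to keep watching the weather myself.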

About a month ago I started playing with the idea of creating a personal assistant telegram bot. I wrote about it recently, if you are curious. I was mostly interested in getting reminders and my calendar commitments using an instant-messaging platform. I added a couple more extra features in the meantime. It’s public, you can go check it out and use it for yourself as well.

Then I started to connect the dots: it would be amazing if I could have this bot actually provide me with useful info when I needed it. That way, I could ignore the solar system completely and let the bot handle all the automation.

Being off-grid, and depending on solar with limited power storage capacity, means we always need to keep an eye on “what’s coming”. Today may be a bright sunny day, but if we know tomorrow will bring less than 1 hour of sun, then we need to change the way we use power today. All these concerns are now delegated to the bot.

In order to get data we could trust, I chose to use Meteoblue’s amazing weather forecasting service. It’s probably one of the world’s most advanced weather forecasting services, with a high level of detail, an extensive set of parameters, and high resolution.

I identified 4 main things the bot should be able to do:

  1. Let me know in advance (2 days) of upcoming cloudy and/or rainy days
  2. Give me a forecast of sun hours for a 3-day period
  3. Give me direct access to raw meteograms with cloud layers, temperatures and precipitation
  4. Warn me and my wife in case there is a malfunction or dangerous state with the solar system

Number 1 is perhaps the most useful, as it is the one that allows me to stop worrying about the weather and let the bot handle it for me. I set it up so that every day at 7am it checks the sun-hours and precipitation forecast, and only sends me a message if there is anything worth knowing. If the forecast is good enough not to worry, I won’t receive any message.

Numbers 2 and 3 are manual operations that I can request at any time. I use these if I’m a bit more picky about the forecast and want to take a closer look myself.

Last, but not least, Number 4 is used to periodically check if everything is OK with the solar production and the batteries. This basically checks two things:

  • solar production is > 0 during the day
  • battery capacity is > 40% (below this, we know we have to be more cautious and monitor the batteries more frequently)
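Those two checks amount to very little code. A hedged sketch of what the periodic check does (the function name and message texts are mine for illustration, not the bot’s actual output):

```python
def solar_health_alerts(production_watts, battery_pct, is_daytime):
    """Return warnings for the two conditions the bot watches."""
    alerts = []
    if is_daytime and production_watts <= 0:
        alerts.append("No solar production during daylight")
    if battery_pct < 40:
        alerts.append("Battery below 40%: monitor more frequently")
    return alerts

print(solar_health_alerts(0, 80, True))     # one alert: panels not producing
print(solar_health_alerts(1200, 80, True))  # [] -- all good, the bot stays silent
```

Returning an empty list on the happy path is what keeps the bot quiet unless something actually needs attention.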

Here’s an example of it working from last week, when I unplugged the panels to replace a damaged cable.

This has been great: I’ve stopped using my phone so much to refresh a page of raw data. I’m also more confident that if I’m away for 2 or 3 days, my wife will be able to manage things easily without having to worry, and without me having to check on things remotely.

Going Forward

All the recent work on the bot has mostly been dedicated to these private features, which only my wife and I can access. I’d like to add more, so that the automation of the water tank heater can be done by the bot instead of my previous script. The bot, having detailed forecasts available at all times, can make much better decisions than anything I’ve done before.

I’ll also introduce some more manual commands to turn things on/off, and have the bot call certain APIs directly, instead of me having to use different apps on the phone to do so.