This content originally appeared on DEV Community and was authored by bfuller
NOTE: These are musings as I navigate the world of AI. I suspect some blogs will be more linear than others.
I find all the AI hype exhausting. Ok, I find most hype exhausting, because it rarely lives up to itself. Then it's just a giant pile of disappointment that I have to fake-smile through while someone else is excited. I'm not a fan. That said, there is no way to get around it. Right now, it feels like the only way to survive this stage of tech is to dive into the deep waters of AI. So, that's what I'm doing.
I’ve spent the last 2 months or so digging into AI tooling and I have lots of opinions. In fairness, I used V0 over a year ago to build out wireframes for 3Mór. I didn’t love it. In fact, I liked the experience of creating a mockup using Figma a whole lot more. I had more freedom and wasn’t bound by the content fed into the model. I could use my imagination and push boundaries so the eng team and I could find a happy medium. It’s terribly banal designing with AI tooling.
I don’t write code. I’ve worked in data off and on for a few decades. One thing to note about me is that I have dyscalculia, and it’s possible I have some niche brand of dyslexia, but it’s hard to say. I tend to think in pictures. I can understand complex math problems, but basic arithmetic is weirdly hard because the numbers and scale never stick and I can’t visualize it. Fun fact, it’s thought that Albert Einstein also had dyscalculia, and apparently so does Bill Gates.
Understanding my dev team’s pain more acutely
Why is that important? Well, periodically, I've tried my hand at coding. It generally goes poorly. I try following the book, but I get stuck at the same spot each time: Hello World. Every time I think, what now? Hello World is prescriptive and not interesting: it doesn't tell me how the apps I use work, what goes into scaling an app, or how to get started. It's just following directions. Which, to me, is not interesting. Enter AI tools.
One of the coolest things for me was being able to write a requirements doc, feed it some simple context and tooling requirements, and have AI provide me with a framework. Now, all of a sudden, I could start to look at the files and try to backtrack to figure out what the front end looked like and the code that created it. Reverse engineering is my favorite kind of engineering.
With AI, I'm getting real experience with the tools I've been building over the last decade. The IDE will tell me to look in the console. A few months ago, I had no idea what it was talking about. Console? What is that? Where do I find it? I had envisioned something totally different for the console. Honestly, I was weirdly disappointed by the gap between the term and the actual tool. I knew what logs were, but it never occurred to me how many kinds there are and how different they are. This is where the dyscalculia makes every tool its own special snowflake, a real drag.
The bigger issue is learning when you have to override AI. I have to remember the order of operations to troubleshoot issues. Do I look at the console, the logs, or those pesky TypeScript problems that AI keeps telling me aren't relevant? I have no idea why AI tooling thinks problems or errors thrown by a typed language aren't relevant, but I now have fixing them as a global policy. Anyway, while I still rarely understand what the problem is exactly, I'm starting to get a sense of what the problem could be related to and asking Gemini for help.
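Those TypeScript "problems" usually point at real bugs waiting to happen. Here's a minimal sketch (hypothetical function and data names, not from my actual apps) of the kind of silent failure a type error catches before runtime:

```typescript
// Loosely typed version: `any` silences the compiler, so nothing warns you
// that "+" on strings concatenates instead of adding.
function totalRainfallLoose(readings: any[]): any {
  let total: any = 0;
  for (const r of readings) total = total + r; // 0 + "12" becomes "012"
  return total;
}

// Typed version: the compiler rejects string input outright, forcing an
// explicit conversion at the boundary where the data comes in.
function totalRainfall(readings: number[]): number {
  let total = 0;
  for (const r of readings) total += r;
  return total;
}

// Values parsed from a log or CSV often arrive as strings.
const raw = ["12", "7", "3"];
console.log(totalRainfallLoose(raw));        // "01273" -- silently wrong
console.log(totalRainfall(raw.map(Number))); // 22
```

With the typed version, passing `raw` directly is a compile-time error, not a wrong number that sneaks into a chart. That's why ignoring the "irrelevant" errors never sat right with me.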
In short, I am learning a ton, every day. I'm learning it at my own pace. Or until the AI companies ratchet back their offerings. I can ask AI and not feel dumb that I haven't a clue what it's talking about. All of that to say, I'm at a point where I want to learn how to navigate the data portion on my own. I want a Postgres database, time series data, and an ML model. I don't trust AI to do it right; it has really struggled with real data. But I think AI can walk me through how to set everything up. Time will tell!
Data integrity lessons ignored are costing us all
When you work in and around data, you learn the most important lesson: garbage in, garbage out. What's become clear is that AI companies made some critical errors when they set up their models. They didn't clean the data. Then they tossed everything and the kitchen sink into every model, thinking that was a good choice. Seriously, were there no data scientists helping build the models? It would have been more ethical, and more ecologically sound, to build smaller models with good data.
Our lack of understanding of how these systems work, and of the humans who train them, was made abundantly clear when folks in the US bashed DeepSeek because it didn't have insights on Tiananmen Square. Well, duh. Also, have you read a US history book? They are just as inaccurate, leaving out all manner of atrocity. So don't use an AI model from China to build a History of China app. Problem solved. Add the data that makes sense for the problem you are solving. Get the data from expert sources.
Opportunistic tool creation
What's next in my AI journey? My focus is on climate data. The US government has lots of freely available data, but it's slowly being removed, which makes it impossible to plan gardens and build climate resilience.
I'll keep working on my apps and adjusting them so the garden and home apps have modern feature functionality instead of AI blocks of text with terrible accessibility. I'm feeling more comfortable with how frameworks come together, and as I work on the apps, I'm learning the patterns through repetition and reverse engineering.
I don't imagine myself reaching junior-developer level, but I do hope to get more comfortable with how the front end and back end work together as the weeks go by. I'm really excited to dig into the data of it all: to see if I can learn more about machine learning and find some insights that will help farmers, gardeners, and everyone else be more climate resilient. I want to make sure we can adapt how we grow food and keep our spaces liveable.
All in all, the learning curve at the start of my AI journey has been steep. There is good and bad, but in the end it comes down to the basics and best practices. I suspect that as I get deeper in, I'll become more passionate about best practices, not less. I'll also have a better sense of how we can use this new toolset; whether it's AI, LLMs, or ML, each has its strengths. I look forward to seeing more folks define what those boundaries are, and to exploring them for myself.