The fancy new AI world
Over the last few years, artificial intelligence (AI) has become something many people talk about and use constantly. Especially in the IT and software development sector it is a huge topic, and it is already starting to change how people work and maybe even how they think.
It is impressive what you can do with the AI solutions available today.
But is it all cool and safe to use?
Do we know enough about that new technology?
Does anyone on this planet really know how AI works?
Does our planet have the resources to power AI?
What follows are a few examples and conclusions explaining why I personally have not used AI tools in the past, do not use them now, and probably will not use them in the future.
How it is changing our work
From what I see, some people have already adopted AI tools to change how they get their work done, especially in software development:
- Some use AI chatbots as a sparring partner to develop or validate new ideas, solution attempts and so on.
- Others use AI tools to generate parts of the code or even whole code bases and review the generated code afterwards.
- There are also specialised AI models to review existing code and give hints about improvements.
- Some let AI generate unit tests for their code base, and surely there is even more.
The second variant in particular worries me:
I, and probably most developers, prefer writing code to reviewing code. Still, code reviews are very important and a mandatory part of any software development. But if the “fun” part of writing code is done by an AI tool and a software developer’s primary task is to review generated code, I wonder how satisfying this work will be.
More importantly, if less experienced developers and career entrants mainly work this way, how are they supposed to gain experience in software development? The primary competence needed for code reviews is exactly that experience, and I wonder how developers are supposed to build it when they no longer develop software themselves.
Software development is so much more than just writing code: it is about finding solutions, discussing different approaches to a given problem, debugging unknown behaviours, understanding existing code bases and so much more. AI might help in all of these areas, but if at all, it should be a tool that supports us. I am sure we will fail if we use it to replace human developers entirely.
We should always be aware that current AI tools are rather error-prone and that we can never blindly trust any result an AI tool produces. People who understand AI models better than I do assume that, given the way current models work, they will never be free of mistakes or wrong results, e.g. the various forms of hallucination.
My conclusion: it does not help us at all if AI tools take over the parts of our work we enjoy, and what remains for us is to question and review every result to ensure the model did not introduce new security risks, bugs or unwanted features in general.
Data used to train the models
It is probably common knowledge that AI models need a lot of training to produce any reasonable result. That training requires huge amounts of data.
We know only a little about where that data comes from. What we do know is that AI companies have used illegal downloads of copyrighted works to train their models. Any individual would be sued for this, but big tech AI companies just do it and nobody really cares.
It is also known that some AI companies continuously scrape the web for new content to train their models.
That is not bad per se, but given how aggressively they crawl websites and what consequences this causes for website operators,
in terms of traffic costs but also reachability, it cannot be anything we as a society want to tolerate.
They ignore or even actively circumvent common anti-crawling conventions like robots.txt.
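For context, robots.txt is a plain text file in a website’s root directory that tells crawlers which paths they may visit, and honouring it is entirely voluntary. A site operator trying to keep AI crawlers out might write something like the following sketch (GPTBot and CCBot are real user-agent names used by OpenAI’s and Common Crawl’s crawlers; other crawler names would follow the same pattern):

```text
# robots.txt - block known AI training crawlers from the whole site
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

# Everyone else may crawl everything
User-agent: *
Disallow:
```

The problem described above is exactly that this is only a request, not an enforcement mechanism: a crawler that chooses to ignore the file faces no technical barrier, which is why operators increasingly fall back on server-side blocking.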
Privacy
Most of the commonly used AI tools are provided by US big tech companies, and to use them effectively you need to register; I do not know for sure, but I assume you have to provide your real personal data. Once you are logged in to chat with the AI or use it in some other way, the company knows who you are and what you use the AI model for, and it can even log and evaluate all the queries you send to the model. Whether they actually do this, you never know, and there is no way to be really sure.
Furthermore, as far as I know, most providers offer an option to opt out of having your data used for training the AI model. But why is it an opt-out instead of an opt-in in the first place? And then you have to trust the company that it really honours your choice. How can anyone build that trust in companies which are run by billionaires in a country with an increasingly non-democratic regime?
This is especially critical when people “discuss” quite sensitive topics with the AI model, like health or financial issues. But it is also relevant for software developers: to give you any reasonable result for a coding task, the AI model usually needs to know the context and your requirements, or even needs access to your entire existing code base (think of code reviews or adding new features).
I personally would never share my ideas or code with an AI model run by anyone I do not know and cannot trust in any way. And even if the US companies were trustworthy, they are still subject to US laws which force them to share their data with the US administration on request (see the CLOUD Act).
AI versus climate crisis
Using AI models requires a lot of computing resources, and thus huge amounts of energy and water to operate and cool data centres. The necessary training of AI models consumes even more energy.
To satisfy the enormously increased energy consumption, the US government as well as some US AI companies are planning to build new nuclear power plants and are about to reactivate old, already decommissioned ones. At the same time, the US government repeatedly tries to stop the construction of new renewable energy plants such as off-shore wind farms.
Why on earth?
What is wrong with them?
The ongoing global climate crisis will change all our lives, and first of all the lives of those who can hardly use the fancy new AI stuff at all (the global south). Rich US and European people might be able to cope with the crisis a bit longer than others, because they happen to live in a part of the planet where the consequences may arrive a bit later, and they have much more money to save themselves. In the end, though, we all have to deal with the consequences of constantly destroying our planet.
xAI powers some of its data centres with mobile methane gas turbines which are meant as temporary power sources for emergencies, yet uses them as stationary installations. Those mobile power stations produce more emissions than regular gas turbine power plants. It took more than a year until the Environmental Protection Agency declared that use of mobile gas turbines illegal.
In addition to the new power plants, new electric power lines have to be built as well to transport the energy from the plants to the data centres. Such costs are usually passed on to end customers via their electricity bills. So even people who do not use AI will pay for it.
I wonder how people deal with the conflict that we are already doing far too little against the climate crisis, while at the same time adopting a new technology like AI which requires a lot of additional energy we basically do not have, for a rather small outcome. Really, how?
Conclusion
In my opinion, the negative aspects of AI in its current form in 2026 outweigh the rather small productive outcome. I see that it is fun and nice for many people to “play” around with this technology and that, at first glance, it might seem to help solve problems.
But is it worth it? For me it is not.
Maybe I will regret this in a few years, when the whole (IT) world uses nothing but AI and everyone without ten years of experience in heavy AI usage is lost.
But there are still all those downsides, especially the huge energy requirements. I will be happy to revise my decision once there is a solution that keeps AI from actively destroying our planet.
Disclaimer: all of the above are my personal opinions, based on my experience and on what I have read about AI and its use in various news sources. You do not need to agree with me, but feel free to contact me for corrections or to discuss the topic in general.
Disclaimer 2: this text was written by me, manually and, as you might have guessed, without any AI involved, including all mistakes and the em dashes used.