AI hype is calming down in some companies, but in others, like the one I am currently consulting for, we are still riding the hype.
A few days ago a peer asked me about a problem with a tool that I had mostly developed. That is fine, as there are always some bugs in every piece of software, though this bug concerned behavior on bad data and was therefore really out of scope. Still, I offered to help, even though the correct fix would be to fix the bad data.
The correct fix, without going into the details, would be more or less:
- remove a few thousand files
- remove another few hundred files
- run two scripts about 15 times
- rename a few files that the scripts output
- check that everything is working
- run the tests
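The steps above can be sketched as a plain shell script. This is only an illustration on a throwaway directory; the file patterns and script names (`convert.sh`, `normalize.sh`) are made up for the example, not the real project's.

```shell
#!/bin/sh
# Rough sketch of the cleanup steps, demonstrated on a temp
# directory; all names and patterns here are hypothetical.
set -e  # abort on the first failing step

work=$(mktemp -d)
mkdir -p "$work/out"

# stand-ins for the files to be removed and the scripts' output
touch "$work/bad1.json" "$work/bad2.json"
touch "$work/out/a.tmp" "$work/out/b.tmp"

# steps 1-2: remove the files (a few thousand in the real task)
rm -f "$work"/bad*.json

# step 3: run the two scripts about 15 times (placeholders here)
for batch in $(seq 1 15); do
    : # ./scripts/convert.sh "$batch" && ./scripts/normalize.sh "$batch"
done

# step 4: rename the scripts' output to the expected names
for f in "$work"/out/*.tmp; do
    mv "$f" "${f%.tmp}.json"
done

# steps 5-6 would be the project's own checks and test suite;
# here we just count that both files were renamed
renamed=$(ls "$work"/out/*.json | wc -l)
rm -rf "$work"
```

The point of writing it down like this is that each step is a dumb, mechanical command; the only real variable is how many files it touches.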
So the only complication is the quantity of changed files. Since this operation touches a few thousand files, the scope of such a change is big and hard to review. That is why running the tests afterwards is so important.
But the whole process is not complicated; there is just a lot of boring testing and validation to be done after such an operation.
I have already done it twice, though on a much smaller scale: about 10 and 20 files. It is not complex, just boring and time-consuming.
The difference is just scale.
My colleague responded with:
I could ask Claude to do that!
This was immediately followed by the question of whether such a tool could consume a few hundred files and rework their contents. I was surprised. I understand that this is connected to a tool I wrote, and that I am therefore more familiar with the process… but it still felt strange!
Why would you ask an AI tool to do it for you? There are literally tools for that:
- the delete key on your keyboard
- the scripts we wrote, which are available on every project we work on
- tests
Why do you need Claude to do that for you? It does not make sense.
A few days later we were discussing an issue connected to our release procedure. Like every procedure ever, it is not 100% foolproof. Nothing ever is. People are fine with procedures that work 90% of the time; the rest is handled ad hoc. When you have problems releasing your code to production, though, the procedure should include a recommendation on whether you should roll back or fix the problem as soon as possible.
Another engineer proposed:
I can imagine asking Claude whether we should roll back or not.
I am not really an expert on how fast Claude is at going through multiple projects and producing an answer on whether you should roll back or try to fix the deployment, but it does not seem sensible.
Just imagine! You are a plumber, and instead of immediately rushing to fix a broken pipe, because the water will destroy the customer's house, you just stand in front of it, typing on your phone instead:
ChatGPT, I have a broken water pipe and it is leaking. How do I fix it? Should I try to close the water intake? Or should I cut out part of the pipe and replace it with a new one?
This does not make sense.
You are the specialist here! Sure, learn, use so-called AI, see if it can give you correct answers. Maybe it has access to new techniques that you do not know… But not when there is an emergency!
We are still climbing the AI hype curve, but I have a funny feeling that this whole thing is not so good for the industry right now.
Either you are an engineer or you are an agent manager.

