When is it ok to use AI?

I’m not the AI police. I’d quite like to be. I’ve met and worked with people who are sort of the AI police, but in a much more “this shouldn’t happen and this 500 page document explains the most ethical uses for AI and tech billionaires will absolutely take this seriously” way rather than an enforcement and actual outcomes-focused kind of way.

That said, on the use of AI and when it’s ok, I have thoughts. Opinions, even.

Should you use AI for coding?

The self-serving answer here is “yes, absolutely” because I’ve essentially summoned the custom WordPress theme that I’ve wanted for ages out of thin air. Realistically, I’ve summoned (and I refuse to use the word created) a monster out of the plagiarized code of a million developers that came before, run through a blender, mixed up, and delivered by a clever parrot. Coded over the shoulders of giants.

The more nuanced answer is probably still leaning towards yes, and with the above I am putting myself down a bit. Creating a WordPress theme with the affable Claude, the only AI assistant I imagine talking with a French accent, was easy because I know what goes into making a WordPress theme. I’ve made one myself. It was terrible. This site sat with it in an unfinished state for about five years.

As Claude was talking through the steps it was taking, I understood why it needed multiple files, I knew where to upload them, and I knew what to specify in order to get them working the way I wanted them to. When things didn’t work properly, I knew the right sort of language to use in order to get it working properly. The transparency of the AI and the basic web design knowledge I have built for myself meant that the process felt seamless and accelerated, rather than a complete black box that spits out something fully formed and inscrutable from the void.

I work with some extremely talented and knowledgeable developers who use AI all the time and have been advocating for its use since ChatGPT started becoming a big deal, but they use it like an enhanced auto-complete, or a super specific search on Stack Overflow. There is zero chance of someone without the technical expertise and experience of these guys getting even a similar result from poking at AI and expecting actual magic.

Should you use AI for art and design?

Using AI to produce a design or a piece of art inherently removes a core point of the creative process, which is to express human thought, emotion, or messaging. Like many things AI does, at first glance it can look absolutely fine, but interrogate it any further and it completely falls apart, even beyond noticing that a person has seven fingers.

The designers I work with are almost closer to sales teams sometimes. They create something, but then have to justify it and get stakeholders to go along with it. Unfortunately for designers, everyone thinks they can do the job and everyone has an opinion on the output.

Whilst art is an expression of the soul, design in general is an expression of ideas. Both of them fall apart in weird ways once something with neither soul nor ideas gets involved in the creative process. At first glance things can look fine. To an untrained eye they can look fine for slightly longer, but stare at the proverbial magic-eye long enough and you start to see a weird twisted dolphin emerge. You might not even be able to fully articulate what’s wrong with it; you’re just left with an impression of a funny-looking blowhole. An experienced designer will protect you from all of this.

That said, I know designers who produce amazing work and who do use AI for parts of their job. Mood boards, or pulling ideas together, can be done with generative AI, and some of the grunt work of reformatting images, resizing things, and making the link between design and development is a natural fit for AI.

Generative AI for producing artwork is nebulous, though. We’re already starting to get good at recognising when art feels off, or sloppy, so the second someone tries to pass off generated “inspiration” as a finished piece, whoever publishes it is opening the door to some major, justified criticism.

I think some people say that it’s ok for concept art, which is similar to what I’m saying above, but concept art is in itself an entire discipline and takes tremendous talent. It would be horrible to sacrifice that at the altar of convenience and shave that corner off the process.

Interlude: The Human Override Blog Logo

The logo for this very blog is an excellent example of generative AI artwork being just somewhat off. I generated it on a bit of a whim to see what the AI would do with some fairly broad prompts, and thought the result was fun, but it’s absolutely something that doesn’t make sense. The French parrot insisted on adding a load of extra gimmicky text to make it more cyber-punk-y, and it added a spinning Illuminati-style eye to the top. I got rid of most of the text, changed the eye to a tortoise pattern for obvious reasons, and ended up keeping it mainly because I wanted to work out how to use SVG files on the site and because I liked the little glitchy effect on it. But it’s not art, it’s not design, and I wouldn’t want to use it on anything professional.

This is almost definitely me documenting my first step on a very slippery slope, and before you know it maybe I’ll be scrubbing all posts like this and rebranding myself on LinkedIn as an animated logo design guru or something.

Should you use AI for writing?

Absolutely not. I can’t imagine any way in which using AI for writing is a good idea, but maybe that’s because I’ve only seen it used terribly, and I’m probably a bit biased because on a good day I consider myself a writer.

Once you get an AI to write something to communicate, the person writing back to you might be using an AI too. At that point, you’ve got two people delegating their writing to algorithms, and they might just stop paying attention to what they’re saying to each other. Expand that out a bit further and you end up with dead internet theory: people only pretending to talk to each other, having to do a hard blag when both of them have been invited to a meeting convened by two AIs that are just trying their best to fit into the corporate culture.

I think the most dystopian version of this was the AI personal message generator I found on a fundraising platform when giving a donation for a co-worker’s impending baby. The messages it generated were unbelievably saccharine, slightly gushy and emotional, and also slightly too specific to be anything you could possibly use. It makes you question every single comment you see on the internet, however. It’s not even automated bots, just people being lazy and needing to fill the void with noise rather than thought. This has always been the problem with the internet – when you give everyone a voice you have to realise that not everyone has anything to say AND THAT IS FINE. Unfortunately, now everyone has a voice, and those with nothing to say can borrow something to say instead.

I have used AI to generate writing. I thought it might be a shortcut to hit a stupidly tight deadline (that I was responsible for setting in the first place), and what it produced was the sort of bland, non-committal copy you’d get from a mediocre copywriter. It was utterly unusable. Maybe it can be used for helping to structure something, but otherwise its only use is filling proverbial column inches – content that was never meant to be scrutinized or even looked at in the first place, but that fills a space that would otherwise be left blank. A generative equivalent to ordering books by the foot to fill an empty bookcase, the difference being that none of the books make sense.

AI’s impact on learning: Mind The Skill Gap

The earliest challenge I heard about AI related to the legal industry. AI can supercharge a lot of the case law research and preparation that junior lawyers spend upwards of 25 hours a day poring over. Talking with someone who had left law to retrain as a journalist, they told me the very high annual salary they had walked away from, but then qualified that it worked out at about £5 an hour.

That sounds like a net positive for junior lawyers, until you sit with it for five seconds and realise that of course there is now no longer any point in hiring a junior lawyer, as there just isn’t that level of work for them to do any more. But this isn’t replacing fifty farmhands with a big tractor. The AI isn’t only replacing labour, it’s also replacing the gathering and cataloguing of knowledge. Those junior lawyers that make it through the legal career gauntlet ultimately become senior lawyers, and partners, and lawyers-in-charge-of-things, whereupon their day-to-day work changes significantly. They have less time stuck in research mode, but their minds are filled with years and years’ worth of research and they have an internal index of case law that can be summoned and linked together in a way that can only be achieved with a career’s worth of knowledge.

The AI purists might say that the senior lawyers therefore just need to be replaced with an LLM too, but at that point, we should all be uncomfortable with the idea of lawyer.exe building and running legal disputes. It becomes even more troubling when you realise a small number of those senior lawyers that have been culled would have become judges, so you’ve also got lawyer.exe interfacing directly with judge.exe.

Law is an easy-to-understand example of a field where human knowledge learned the hard way is required to have a human operating at the senior levels, but this isn’t unique to law. In fact, it’s pretty universal.

Junior developers become senior developers.

Junior designers become senior designers.

Journalism interns sometimes survive and become marketing managers.

If you cut out the junior role, you’ve cut off the route for someone to learn how to do their job. Employing junior staff members is like choosing a “carry” on your Dota team – they won’t be much use for a really long time, but if you invest in them and they work hard enough, they can become invaluable and start paying off massively.

AI can make your life easier. It can take the load of tedious and repetitive tasks. But it can also just replace everything, and I suspect we’ll end up with a somewhat disenfranchised generation that finds out the hard way that the boring parts of life are sometimes important to contextualise and unlock the truly amazing parts of living.

AI usage is the real-world analogue to the moral of the film Click. If you start skipping all the things you find boring, you might find you’ve just skipped everything.