Is Trying to Stop the Use of AI a Pointless Endeavour?


When ChatGPT was first released in 2022, society skipped any scepticism about just how influential it would be and went straight to discussing the nuances of how we would regulate its usage across every aspect of our academic and professional lives. Three years on, almost every university and organisation has rules in place that outline the extent to which it can be leveraged when writing essays, completing interview tasks and going about our day-to-day academic or professional work.


I think, however, that we’ve reached a point where stating the acceptable extent of its usage is as performative as it is redundant. Here’s why.


Firstly, humans are incredibly perceptive, and we’ve been able to spot the tell-tale signs of ChatGPT being abused since the day it was released. Spend 15 minutes scrolling LinkedIn and your sixth sense will kick in; you’ll start spotting them everywhere.


It seems that ChatGPT and many other AI tools have a particular stylistic accent, one that any lecturer or hiring manager can spot a mile off, and one that has, for better or worse, killed the em dash. Anyone using ChatGPT to simply do the work for them will be caught out immediately and punished accordingly, regardless of whether they’re told they can or can’t use it.


Secondly, and possibly more importantly, if people use it correctly, you won’t be able to tell whether they have. And anyway, surely using a tool well isn’t something people should be punished for. You wouldn’t punish someone for using a calculator to solve an equation, Excel to analyse data, or Google to do their research.


So why do we say that ChatGPT can’t be used? It’s just a tool, one that enables us to achieve more in less time. Are we no longer interested in efficiency?


The fact of the matter is, both user and software have come a long way since ChatGPT’s release. Users refine their prompts and shape the model’s responses, moulding it into an extremely effective tool. I think we do a disservice to the people using large language models correctly by banning them in their entirety.


Maybe I’m overestimating our ability to spot AI’s handiwork, or underestimating people’s proclivity for abusing it and getting away with it.


That’s just my 2 cents, and if you have your own to throw into the pot, I’d love to hear from you in the comments.