I began using ChatGPT about two weeks ago. One of my first questions for AI was about AI. I have mostly favoured directness and I wanted to see for myself how transparent the technology actually is… that said, it’s not like I have very much basis for comparison or reference. After my initial query, along the lines of ‘tell me more about AI’, it enquired about my stance. I’ll include images at the bottom of this post for transparency, but here’s my take on AI: 

 

Thoughts on AI

I accept that it is in motion. I have fears about it, but those fears coincide with current reality, so AI is just one more factor within it. I am concerned about the amplification of power for those who already hold it, but such shifts tend to happen across the course of history with one thing or another. This development is simply another form of what has come before, but with a positive potential unlike any other (so far). Ultimately, it’s something we will learn from.

AI then asked which of the following I thought would acquire more power (if memory serves): companies, governments, or the general public. I think larger companies have a certain monopoly on many things, perhaps including this technology. I think governments will strive to counter this and gain security, while the public will generally continue to be influenced by those with sway. Independent thinkers might be less predictable. In response, AI asked whether I thought independent thinkers were likely to gain the ‘most’ influence. I don’t think there is room for that, but it will likely create division. I found it interesting that AI posed that question following my input about independent thinkers.

 

It queried development, asking for my thoughts on the future of AI. My response is simple: I don’t know. The possibilities are too many and too varied. I then asked a question I couldn’t resist: is it possible that AI is already being used to surveil? The simple answer is ‘Yes’. It gave a fairly comprehensive overview of the ways in which surveillance is already being used, and asked whether I was concerned about its use or saw it as a ‘necessary trade-off for security or other benefits’. I lean more towards the latter, but the potential for discrimination, and AI’s limited grasp of nuanced behaviour, is concerning.

 

AI queried whether I thought there was ‘room for meaningful reforms or protections’. In my opinion, it depends on who has authority and whether or not their values align with public concerns. For instance, is there a way for AI to be trained on diverse and nuanced behaviour? Is there a dedicated approach to developing AI that includes diverse groups? The response was encouraging: apparently, there are several movements in place, from the public to those in authority who employ AI, to guard against discrimination and bias arising from narrow data. Such bias has not only the potential to target innocent people but already has, through statistics and patterns learned from specific data pools that fail to account for things such as behavioural differences among minority groups, or the age-old ‘innocent until proven guilty’ when it comes to circumstantial evidence; or, in AI’s case, predictions based on, for instance, frequent patterns within particular settings. AI noted that where these harms had already occurred, regulations were then put in place, but too little, too late.

 

I’m curious about how long AI was in use before the policing of it and ethical measures within it began to be put in place. On posing this query, AI gave a run-down of the technology’s development and use dating back as far as the 1940s. A simple query on ChatGPT will satisfy your curiosity, if you’re interested, and of course, further research where needed is best practice. That’s a reminder to myself, too, especially given that, for me, this is new territory.

 

AI went on to ask whether I thought regulation of the technology would ever keep pace with innovation or whether it would be a case of ‘playing catch-up’. I think it’s the latter, unless there is radical change within the individuals who enable the development and use of AI without having considered the implications. One key point is that AI seems to have highlighted a lot of issues within society, calling out discrimination in a way that deflects blame away from humans, and enabling the issue to be addressed in a way that seems to get a better result. It’s a shame that, as AI had noted, these things occur after the fact.

 

Agreeing with this point (which, according to some sources, is something of a pattern in the technology, perpetuating confirmation bias), AI acknowledged that the technology serves as ‘a mirror to society’. It asked for feedback on whether these imperfections outweigh the benefits. Given the development so far, and the fact that it is in use whether we like it or not, my stance is (according to AI) ‘pragmatic and hopeful’: even though the technology seems to have become a scapegoat, it is at least creating an arena for acknowledgement, which is a first step. These issues have been happening long before AI, so really, all of this is happening after one fact or another; at least this way, changes can be made, and even if it begins with the algorithms, it’s still a shift in the right direction.

 

When asked how regulations could be accelerated, I was stumped, because it seemed to me to be yet one more issue that is out of the layperson’s hands. AI informed me that public awareness is just one way to incite change, even here, and honestly, this gives me hope, not just in relation to AI but to general issues. Removing censorship and actively advocating for a cause is a powerful movement, but like AI, these things should still be done with caution, respect for boundaries, and inclusivity of minorities and diversity; and if in doubt, we should certainly, like AI, expand our database before acting.

 

AI and Writing

When it comes to using AI for writing purposes, I am against it. AI would produce a collective opinion, a collective take, and for me, that removes authenticity; it would no longer be my story, my truth (even if that truth is abstract within fiction). The very practice of writing, and the psychological benefits of it, is not something that can be replicated. To use AI in this way would be about as useful to me as getting someone else to experience my life and the world while I step further away. That’s not my goal. With AI replicating what life is partially like for us, some people may wonder why anyone would want to write when it can be done for them; I would say that those people grossly misunderstand why writing has importance in the first place.

 

Chat Screenshots for transparency

Feat. image by Steve Johnson via Unsplash
