Tommy Reynolds

AI poses a risk to society - what should we do?

There's increasing concern that AI will create huge problems for society in the very near future, but what are these problems and how can we prevent them?

Today, Carl Bernstein and Bob Woodward, the veteran journalists who broke the Watergate scandal, claimed that AI could become a "huge force" for journalism in the future. They also warned that its potential to cause job losses, spread misinformation and erode privacy must not be underestimated. Those warnings were aimed at journalism, but the problem runs much deeper than that.


Elon Musk, alongside a group of AI experts and industry executives, signed an open letter outlining the dangers to the world as we know it if AI development is not paused for six months. This pause, they argued, would allow time for critical thinking about the consequences of such a powerful tool. They wrote: 'Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?' These are questions that sound like they've come out of a 1980s sci-fi film - we truly are in the future our grandparents foretold.


Yet the questions are pertinent and troubling. With regard to the first, about misinformation and disinformation, there is already a plethora of false narratives being produced online to lure audiences into interacting with content. AI could exacerbate this problem: as deepfakes and audio manipulation improve, false content may become indistinguishable from truthful content.


The second question is more complex. When we think about machinery taking over our jobs, we usually think of assembly lines in factories or automated drivers on transport - essentially, we imagine manual labour. What we have failed to foresee is the potential for AI to usurp intellectual roles: teachers, lawyers, journalists, recruiters, marketers and so on. The AI systems these professionals already work with could replace the roles themselves if the technology becomes sophisticated enough to operate on its own. As an example, GPT-4 with plugins can assemble information from across the internet to create articles, essays and reports on just about anything. Once this technology can come up with the questions as well as the answers, without human interaction, it can be configured to perform these intellectual roles itself. Children will be able to ask AI questions as if it were a teacher. AI will be able to monitor the world's news cycle far faster than any journalist and report on it. AI will be able to cite the laws applicable to any case in court to determine a verdict. So, shall we prioritise the development of AI at the expense of human fulfilment? I say no - at least not until a proper contingency plan is in place.


The third and fourth questions seem like another inescapable consequence of unchecked AI development. The engineers working on ChatGPT will eventually reach a point where its intelligence is more complex and sophisticated than that of any human on earth. By that point, who's to say it won't have the capability to replace us as the apex "species" through its superior numbers and intelligence? A framework needs to be put in place to carefully assess its progress so that we don't lose control over it and, in turn, our civilisation. This is why the potential of AI must not be understated. You may not even realise how much AI already permeates everyday life: facial recognition, navigation, search and recommendation algorithms, social media feeds, chatbots and many more. The people in charge of improving this technology should not gamble with the future of our civilisation, which brings us to this passage from the open letter:


'Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.'

