AI: Friend or Foe?
From February 2024
Few fields have grown faster in the last year than artificial intelligence (AI). AI has come to the forefront of conversations among politicians, big businesses and even philosophers. 2023 was a breakthrough year for image-manipulation technology as well as text-generation programs such as ChatGPT, creating countless situations of ethical and professional concern.
Researchers at schools including the University of Toronto have been using AI to advance scientific discovery while also working to ensure this rapidly advancing technology aligns with human values. 2024 will prove a substantial year in the new era of handling AI-generated content and discerning what was actually created by the human mind.
AI is presently reshaping how industries around the world promote themselves and create content. It is also creating challenges we must confront. With practical applications of AI developing so rapidly, regulators and lawmakers now have a newfound obligation to keep pace.
The Government of Canada is set to pass the Artificial Intelligence and Data Act (AIDA) later this year. It will be the first Canadian legislation to regulate AI and will place responsibilities on businesses using AI technology.
“For businesses, this means clear rules to help them innovate and realize the full potential of AI,” says a post on the Government of Canada website pertaining to AIDA. “For Canadians, it means AI systems used in Canada will be safe and developed with their best interest in mind.”
“The Government of Canada remains actively engaged in international discussions on AI regulations and continues to work with partners around the world to drive collaboration and ensure alignment in the responsible development and use of AI.”
Another issue is what constitutes fair use when AI-generated art is based on the pre-existing works of other artists. AI systems created more than 15 billion images last year, all of which drew from the art and photography of actual artists. AI cannot produce original work; it can only create images based on prompt words and the pre-existing images the system has access to. Many artists have seen their work republished as a piece of AI ‘art,’ and in some instances the artist’s signature or watermark can be seen in the AI-generated image.
While there are benefits to using AI, there are also horrendous flaws. Deepfakes (AI-altered images, videos or audio recordings that create a seemingly real replication of a person) have wreaked havoc in the lives of some. Many such situations involve women whose image has been used without their consent to create content depicting them in explicit and sexualized ways. One such situation in the United States involving a minor has led to a lawsuit.
Deepfakes can also involve politicians, a concern voiced by many with the U.S. presidential election set to take place in November. While tech giants have pledged to actively combat the creation and distribution of deepfake videos and images, it is unclear how effective any company can be at this time. Deepfakes can be used to create propaganda that convinces viewers certain individuals have made statements they never actually made. Discerning deepfake footage or photographs from reality can at times be easy, while in other instances it is far harder to determine whether the content is authentic. As the technology improves, telling what is real from what is AI-generated will only become more difficult.
Teachers around the world are confronted with a new world of tools provided by AI, and with just as many drawbacks. Among the issues universities and high schools are facing is the number of essays teachers are seeing that have been written by AI. The trouble for teachers and students is determining what has been co-written or written entirely by an AI program such as ChatGPT.
While there have been several instances of students caught using AI to write essays, there have also been situations where professors have given a failing grade to students who could not prove they honestly wrote their work. At Texas A&M an entire class was failed for what the professor deemed to be the mass use of ChatGPT, and many students were denied their diplomas as a result of failing the class. Ultimately, the situation came down to the professor using AI detection software incorrectly.
The University of Manitoba, as have many other post-secondary institutions in Canada, has released specific guidelines outlining the improper use of AI. The U of M has gone so far as to suggest that professors not rely on AI-detection programs, as the results have been inaccurate in many cases.
AI has also been used to write item descriptions for online products as well as articles published to reputable websites. In November 2023 Sports Illustrated found itself in the collective crosshairs after it was exposed that several credited authors on its website were AI writers, meaning the writers were fictional and every bit of content credited to them had been AI-generated, according to Futurism, a science and technology news outlet. The ‘authors’ even had personal biographies and photos, also AI-generated. Founded in 1954, Sports Illustrated was once a titan of the sports journalism industry. In January the outlet saw massive layoffs, leading many to speculate about its future. This may hint at why it chose to use fake writers rather than employing journalists.
While Sports Illustrated tried to publish its AI authors in secret, BuzzFeed has taken to openly publishing AI-generated work. It has published AI-generated travel guides and quizzes and is still experimenting with the idea. Such decisions create anxiety among many in the industry, as laying off human journalists and writers in favour of AI removes the humanity from the publications. BuzzFeed is still reeling from shutting down its BuzzFeed News branch last year and laying off 15 per cent of its workforce.
The History of AI
In the 1950s Alan Turing explored the mathematical potential of artificial intelligence, suggesting that computers could be designed to make decisions based on available information, much as humans do. In 1950 he wrote a paper titled Computing Machinery and Intelligence, which explored the idea and discussed how to build intelligent machines.
Computers at that point in time were costly to operate, could only execute commands and lacked the ability to store them in memory. By 1956 a program called Logic Theorist, which could mimic human problem-solving skills, had been funded by the Research and Development (RAND) Corporation and presented at the Dartmouth Summer Research Project on Artificial Intelligence, a conference organized to bring together many minds to explore AI. Logic Theorist is considered by many to be the first artificial intelligence program.
Research in the field continued through the decades. In the 1980s the concept of deep learning was popularized, allowing computers to learn from experience. Government funding came and went, all the while expanding the field and inspiring new researchers. In 1997 Deep Blue, a chess-playing computer program created by IBM, defeated world chess champion Garry Kasparov, the first time a reigning world chess champion was defeated by a computer.
AI is developing rapidly. The laws we create today will not account for the problems that may still arise.