
LET'S SHAPE AI

CUERPO EDITORIAL


BY: HAYDÉE SOFÍA ROSADO

MAIL: HAYDEESRA@GMAIL.COM INSTAGRAM: @HAYDEE.JPG


Abstract: Artificial Intelligence does not have an exact shape yet; it is simply the newest, and probably the most important, technological introduction of this century. Since it is still being developed, it can still be molded depending on social perception. People are only beginning to form their own understanding of it, because it is now more accessible for daily interaction. It is precisely because neither its nature nor people's opinion of it is set that it remains so malleable. But what should be taken into account when trying to stabilize its shape and understanding?

Key words: Artificial Intelligence, cultural and social understanding, interpretative flexibility, safety, development

Human society is constantly being reshaped through and around technology. When technological artifacts are introduced, people react to try to interpret and adapt to them. Artificial Intelligence (AI) is the most recent technology generating not only a great deal of discourse, but also diverse repercussions on human interactions, both with each other and with the world around them. By this point, the main questions to ask are those surrounding the technology and its nature, and those around a very possible transition towards AI and how it can be made successful and beneficial.

Interpretative flexibility holds that technological artifacts are culturally constructed and interpreted (Pinch & Bijker, 1987). This means that technology is usually created with a purpose in mind; however, it can be put to multiple uses, which may differ from the original one. That is because tools are created on the basis of social and cultural understandings of what is perceived as necessary, helpful, desirable, and so on. Moreover, when a technology is introduced, people take time to understand it, accept it, and adopt it into their regular lives, and they may do so in a variety of ways.

When it comes to AI, a big part of what is to be constructed comes down to how it is programmed to work and the norms it must follow. What is to be interpreted is how people, businesses, and institutions will adopt it all over the world, in all parts of life. Both dimensions generate an enormous number of questions, because the widespread use of AI has the capacity to cause enormous changes.

To further understand what comes with AI's use, one can study OpenAI. Since it has created the most powerful AI model to date (Hern, 2023), it is relevant to understand what a company like this takes into account when developing such technology, and what it says should also be considered, even outside its field of expertise. While stating the revolutionary capabilities of AI, OpenAI acknowledges its responsibility for guiding the technology's trajectory and safety (OpenAI, 2023).

How do they do it? According to their product safety standards page, all OpenAI technology is created around the principles of minimizing harm, building trust (supporting safe, beneficial applications of the technology), learning and iterating, and pioneering trust and safety. In other words, the tool is being developed in the knowledge that it can be abused or misused, and safeguards are introduced to reduce the harm it may cause. What is more, it is constantly evolving, gathering feedback and solving existing issues.

For AI in general, these are good foundations for its development.

However, AI, like all technology, is created to interact with people. As Saria (2023) claims, workflows are as important as the underlying AI models, and it is important to evaluate both; yet often one is considered without the other. This was said of AI in medicine, but it is relevant elsewhere, both in development and in use. The interaction between people and AI creates a huge area of opportunity and, sadly, of failure too, mainly because its implementation varies around the world, as does its interpretation (Hagerty & Rubinov, 2019). The primary challenges here are social and cultural, spreading into other aspects of life.

On the one hand, at the development level, companies reinforce existing oppressive economic practices in the training of AI, as in the case of the Kenyan workers at Sama, a company used by OpenAI to make ChatGPT less toxic. They were paid less than two dollars an hour to sift through violent internet content in order to feed another AI that would classify such content as dangerous and later help train ChatGPT (Perrigo, 2023).

On the other hand, there is little regard for the social repercussions that come with AI. It tends to worsen social divides and exacerbate social inequality, mostly affecting historically marginalized groups and reproducing the same pattern between the Global South and the Global North (Hagerty & Rubinov, 2019). That is the main blind spot of AI developers: even though they are not responsible for policy creation, it must be pointed out that they still have the ability to guide where the future of AI and human interaction is going.

By this point, AI is already part of human activities, with more companies and people using it every day. However, it is not yet a fully adopted technology; when that happens, people will no longer debate its use, and it will be naturalized into daily life. What can be done now is to seek a deeper understanding of the impacts AI will have, especially on marginalized groups and countries in the Global South. As much effort as companies are putting into developing safe AI should also go into making sure its implementation is as smooth and as beneficial as possible. Once again, the technology is directly connected to its users, and it can only be beneficial when it serves the widespread greater good; otherwise, it may become just another tool that reinforces the dangerous and oppressive systems that already exist. The destiny of AI is not set, but for its benefits to come through, they must far outweigh its downsides, for as many people as possible.

References

Hagerty, A., & Rubinov, I. (2019). Global AI ethics: A review of the social impacts and ethical implications of artificial intelligence. arXiv preprint arXiv:1907.07892.

OpenAI. (2023). Developing Safe & Responsible AI. OpenAI.com. https://openai.com/safety

OpenAI. (2023). GPT-4 is OpenAI’s most advanced system, producing safer and more useful responses. OpenAI.com. https://openai.com/product/gpt-4

OpenAI. (2023). Product Safety Standards. OpenAI.com. https://openai.com/safety-standards

Pinch, T. J., & Bijker, W. E. (1987). The social construction of facts and artifacts: Or how the sociology of science and the sociology of technology might benefit each other. In The Social Construction of Technological Systems: New Directions in the Sociology and History of Technology. Cambridge, MA: MIT Press.

Saria, S. (2023). Not All AI Is Created Equal: Strategies for Safe and Effective Adoption. NEJM Catalyst. https://catalyst.nejm.org/doi/full/10.1056/CAT.22.0075

Hern, A. (2023). What is GPT-4 and how does it differ from ChatGPT? The Guardian. https://www.theguardian.com/technology/2023/mar/15/what-is-gpt-4-and-how-does-it-differ-from-chatgpt

Perrigo, B. (2023). Exclusive: OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less Toxic. Time. https://time.com/6247678/openai-chatgpt-kenya-workers/
