Synapse - Africa’s 4IR Trade & Innovation Magazine - 2nd Quarter 2023 Issue 20


SYNAPSE

Africa’s 4IR Trade & Innovation Magazine

ENTERING THE NEW ERA OF AI REGULATION

AFRICA MUST REGULATE AI to reap its full benefits

GENERATIVE AI & the POPI Act

RETHINKING DATA GOVERNANCE

AI DIGITAL COURSE targets Commonwealth public leaders

SOUTH AFRICA LAUNCHES AI INDUSTRY ASSOCIATION

SAAIA TO FOCUS ON RESPONSIBLE AI

2nd QUARTER 2023 | ISSUE 20

GUEST EDITORS

REGIONAL EDITORS

Natasha Ochiel | LinkedIn: natasha-ochiel

In a bid to champion Artificial Intelligence in Africa, she co-founded the AI Centre of Excellence in Kenya, which aims to build sustainable value for AI in Africa by building capacity, building demand and building AI solutions through the Centre.

Darlington Akogo | LinkedIn: darlingtonakogo

Darlington Akogo is a global leader in Artificial Intelligence. He is the Founder and Director of Artificial Intelligence at GUDRA and its subsidiaries, including minoHealth, an AI healthtech company based in Ghana.

Naomi Molefe | LinkedIn: naomimolefe

Naomi is the co-Founder and Chapter Director of Women in Big Data SA, a registered NPO that is part of a global community of 17 000 women. The learning community works with strategic partners to cultivate tangible opportunities for women, unlock latent potential through accessible training and act as a catalyst for the advancement of women in Big Data fields.

INTERNATIONAL EDITOR

Deon van Zyl | LinkedIn: deonvanzyl

I am an accomplished IT professional with 25 years' experience, skilled in multiple languages, with a strong background in development, security, digital forensics, automation, AI, management, and teaching. Through my work, I have been exposed to various industries, influenced public opinion, and networked with innovative leaders. As a roving Guest Editor, I write about new technologies that are reshaping the world, such as Artificial Intelligence, VR & AR, Cybersecurity, Web3 and more.

Deon van Zyl (Norway), Senior System Developer, Nordic Semiconductor
Natasha Ochiel (Kenya), Co-Founder | CEO, The AI Centre of Excellence, natasha@aiceafrica.com
Darlington Akogo (Ghana), CEO, minoHealth AI / KaraAgro AI, and Member of the United Nations ITU & WHO Focus Group on AI for Health
Naomi Molefe (South Africa), MSc, Manager: Strategic Sourcing and Research at Discovery, and Co-Founder & Chapter Director, Women in Big Data South Africa
Read more editions: previous issues of Synapse, including Issue 15 (1st Quarter 2022), the AI Expo Africa 2022 5th Anniversary Edition and the AI Expo Africa 2022 Post Show Edition, are available to read online.

WELCOME FROM THE EDITOR

Nick Bradshaw | LinkedIn: nickbradshaw

Nick is a tech-focused executive and entrepreneur helping clients, communities, practitioners and start-ups understand the value of Artificial Intelligence, Automation and Digital solutions in the EMEA region. With 25 years' experience in Europe, North America and Africa, Nick has worked with a diverse set of multi-$bn global clients seeking to deploy and mature enterprise-grade software and cloud solutions. He is the founder of the AI Media Group, a hybrid media, events, consulting and trade community offering clients insights into the growing 4IR opportunity in Africa. AI Media publishes Synapse Magazine and runs Africa's largest Enterprise AI trade show, AI Expo Africa.

It's been a whirlwind three months since we last published Synapse. The world has now woken up to the fact that regulation and responsible AI are what we as a species need to focus on unless we want to become extinct (according to some media reports). From calls for government regulation of AI in the USA to the publication of the EU AI Act, we are now seeing the AI debate shift from “it's a nice-to-have option” to “it's coming down the tracks for all of us at full speed”. Are we prepared in Africa for this global shift? What are we doing to embrace and regulate AI?

In this edition of Synapse we focus on some of these regulatory developments. From the launch of the South African AI Association to regulatory thought leadership, we take a closer look at what is happening on the ground across the continent. AI is now the hottest topic in town globally, and with NVIDIA hitting a $1 trillion valuation in May, it's clear those selling the AI “picks and shovels” in the next wave of the AI adoption gold rush are going to see their stocks rise.

We at AI Media HQ cannot wait to see how the year unfolds, and as vendors sign up for the 6th edition of AI Expo Africa later this year, it's going to be a pivotal year in this technology space. We hope you enjoy the 20th edition of Synapse Magazine!

ABOUT SYNAPSE MAGAZINE

Synapse Magazine chronicles the 4th Industrial Revolution as it unfolds in Africa and plays a vital part in connecting the members of this rapidly growing trade community across the region. With a global readership, it puts Africa centre stage with a clear focus on the African 4IR innovation and investment narrative. We cover a range of technologies including artificial intelligence (AI), intelligent automation (IA), robotic process automation (RPA), internet of things (IoT), big data, analytics and devices, as well as emerging standards, ethics and privacy concerns. Now entering its 6th year of publication, this unique quarterly trade publication is FREE to read on the ISSUU platform.

PUBLISHER

AI Media Group

Web www.aimediagroup.co.za

EDITOR

Nick Bradshaw

GUEST EDITORS

Natasha Ochiel (Kenya)

Darlington Akogo (Ghana)

Naomi Molefe (South Africa)

Deon Van Zyl (Global)

EDITORIAL & ADVERTISING ENQUIRIES enquiries@aiexpoafrica.com

ACCOUNTS PAYABLE & ADMIN

Mia Muylaert

mia.muylaert@aiexpoafrica.com

LAYOUT, DESIGN & PRINT

Karin Liebenberg

iCandy Design

Email: karinl@icandydesign.co.za

COVER IMAGE CREDIT

Alison Jacobson

Director at The Field Institute, South Africa (created using Midjourney)

AI EXPO AFRICA 2023 – Connecting Knowledge, Innovation, Education, Support & Business. Join the largest B2B AI, RPA & Smart Tech trade event in Africa and explore the growing 4IR opportunity in the region. 2-3 November 2023, Sandton Convention Centre, Johannesburg, South Africa. Tickets on sale now at www.aiexpoafrica.com. 2-day trade show & conference / 60+ vendors / 50+ speakers & panelists / plenary keynotes / Women in AI Zone / Start-up Zone / AI Art Gallery / R&D poster presentations / 4 AI skills workshops / 7 networking sessions / VIP lounge / Meetup Bar.

CONTENTS

2 Launch of the SA AI Association
4 Rethinking Data Governance
6 Africa Must Regulate AI to Reap its Full Benefits
8 InfoReg Examines Regulation of ChatGPT & AI in SA
11 Generative AI & the POPI Act
12 South Africa Faces Many Challenges in Regulating the Use of Artificial Intelligence
14 Global Perspective: The Era of Autonomous AI Agents
16 The Hard Power Economics of AI for South Africa
18 Leading the AI and CV Education Revolution in Africa by Augmented Startups
20 AI Digital Course Targets Commonwealth Public Leaders
22 Spotlight on Africa's Most Prolific Tech Investor
24 From Text Prediction to Conscious Machines: Could GPT Models Become AGIs?
29 SA's Data, Cloud Blueprint in the Works, Says Minister
30 Kenya: AI Guidelines for Practitioners Launched
31 NASA-Funded Scientist Uses EO Imagery and AI to Improve Agriculture in Uganda
33 Cesarean Deliveries are Rising in Rwanda – AI Could Reduce the Risks
35 Can AI Help Solve Diplomatic Dispute over the Grand Ethiopian Renaissance Dam?
37 Egypt Launches Charter for the Responsible Use of AI
37 Spending on AI in Middle East and Africa Region to Soar to $3 Billion in 2023
40 EU AI Act Explained
42 1,100+ Notable Signatories Just Signed an Open Letter Asking All AI Labs to Immediately Pause for at Least 6 Months
44 Europe Takes Aim at ChatGPT
46 Metaphysic CEO Tom Graham Becomes First Person to File for Copyright Registration of AI Likeness
48 Laugh and Learn: The Surprising Benefits of Chatbots in African Education

LAUNCH OF THE SA AI ASSOCIATION

The South African Artificial Intelligence Association (SAAIA) is an industry body focused on promoting the advancement of responsible AI in South Africa by uniting practitioners across Commercial, Government, Academic, Startup and NGO sectors.

SAAIA seeks to encourage stakeholders in the adoption of responsible AI for the commercial and societal benefit of the citizens of South Africa with a primary focus on economic growth, trade, investment, equality and inclusivity.

From hype to a global reality: the SAAIA vision has been shaped by analysing the global and local landscape, identifying needs and filling in the blanks with research. This has revealed both the challenges and opportunities AI and related smart technologies can bring to South Africa, for both citizens and the wider economy. Our vision is evidence-based, with responsible, human-centric AI as its foundation.

The SAAIA mission is to engage both individuals and organisations, novices and experts, those who are connected and those who are not, so no one is left behind. It is of vital importance that the opportunities Artificial Intelligence presents are available for everyone to embrace. Therefore, our mission is underpinned by ten key objectives:

• Serve as the voice of the industry

• Provide analysis & research to inform strategy & decision making

• Help National, Provincial & City Governments with policy making

• Unite buyers and suppliers to grow the economy

• Connect SMMEs to funding to create new companies & jobs

• Attract FDI to South Africa as the “4IR gateway” to Africa

• Help African smart tech companies find markets abroad

• Showcase the best of South African AI Innovation & Research

• Promote debate on inclusion, ethics, regulation & standards

• Share best practice & education resources for all

Dr Nick Bradshaw, the founder of SAAIA, stated: “Our research has shown that AI and related automation technologies are currently impacting 120+ traditional industries globally AND creating new opportunities and challenges in a timescale never seen before. The speed of this disruption is faster than any other industrial revolution that has gone before it. SAAIA seeks to encourage stakeholders in the adoption of responsible AI for the commercial and societal benefit of the citizens of South Africa with a primary focus on economic growth, trade, investment, fairness, equality and inclusivity. Our founding partners and Advisory Board members are drawn from across multiple domains and are passionate about the adoption of responsible AI. Many of our Advisory Board members are community builders in their own right, tackling key issues like education, inclusion, training, regulation, ethics, policy and investment.”

SAAIA will be holding a launch event and roadshow series kicking off at the Tshwane University of Technology AI Institute Hub on the 19th July. Individual membership is free and members of SAAIA gain access to resources, insights and news throughout the year. Members also receive discounts to join the association’s annual event, AI Expo Africa, which this year is being held in Johannesburg 2-3 November at the Sandton Convention Centre. Learn more and sign up today at https://saaiassociation.co.za/


RETHINKING DATA GOVERNANCE for Just Public Data Value Creation and Responsible AI in Africa

The data-driven digital revolution presents an unprecedented opportunity for many sub-Saharan African (SSA) countries to harness digital resources that have potential to accelerate their socioeconomic development objectives. Consequently, data governance is a top policy priority for many African governments to maximise the benefits of data access and transfers, while addressing multilevel related risks and challenges.

Why do we need to rethink the current state of data governance in SSA?

A sound data governance framework goes beyond privacy and security considerations and acknowledges that collecting data alone has no value if it does not promote demand, usability, and impact, at scale. Robust data governance guides best practices for responsible, ethical data innovations, particularly in the context of leveraging interdependent data-driven digital technologies such as artificial intelligence (AI) and machine learning (ML). A robust data governance framework is also an important component of enabling better quality and more granular data to achieve development goals, and of ensuring people's digital rights are protected through policy tools and frameworks that ensure just public data value creation and responsible AI (RAI).

While many private companies can do more to share various forms of data for the common interest, and should be accountable for their role in extractive data practices at different points in the AI value chain, including the unprecedented wealth created by their unfettered data collection, the public sector is often a major data producer and collector in its own right and has an important role to play in improving public planning, service delivery, climate change mitigation efforts, and regional economic integration. Unlocking the economic and social value of public sector data through an enabling regulatory and policy environment can improve economic efficiency, may enhance democratic accountability, and can encourage trust in government.

In addition, high-quality machine-readable data forms a powerful value chain, which can be combined with data-dependent frontier technologies to create value (insights, intelligence, and products) and powerful analytics that can vastly improve innovation systems, sustainable digital development, public governance, and service delivery.

However, many data governance frameworks in SSA tend to focus too narrowly on the collection and production side of data, under the assumption that whatever data is produced will be used and that data has inherent value. While many SSA governments acknowledge the importance of open data and data governance to ensure the design, collection, management, use, and re-use of data fosters robust, trustworthy data ecosystems, these efforts fall short, as they are often limited to privacy and security concerns, which may inhibit scaling the positive and transformational benefits of data innovations for the public good.

As shown in Figure 1, an effective data governance framework acknowledges that, beyond mitigating the risks and harms associated with data, there are also considerations of creating public value from data. Reaping equitable benefits from data is highly dependent on acknowledging contextual realities to inform coordinated and transversal regulatory and policy frameworks that facilitate a conducive, interoperable data ecosystem.


There are multiple challenges that impact effective data governance in SSA data ecosystems; these include the following:

i. There is often limited funding to support state-led data curation to ensure data quality, active data demand, interoperability, and ongoing management of public data through its lifecycle;

ii. Sources of public administrative data are fragmented, scattered, and poorly organized; and

iii. Multiple data collection initiatives implemented by non-state actors that claim to be for the public interest are designed without co-creation and buy-in from state actors and are funded in silos, resulting in data that is collected independently, not acknowledged as part of the official national statistics system, or not used to inform official public policy making, and thus inadequate to address cross-cutting developmental needs.

This is not a comprehensive list; the aim is to highlight that the value of public data for development in SSA is largely untapped, since realizing public data's full value entails repeatedly reusing and repurposing data in responsible and creative ways to promote economic and social development. Therefore, a sound data governance framework requires that, beyond privacy and security, institutions and stakeholders have the right incentives to produce, protect, use, re-use and share data along the data value chain, ultimately promoting just public data value creation.

What is just public data value creation and how can it support better data governance in SSA?

Just public data value creation denotes that data in itself has no value, and that existing power dynamics, exclusions and bias in data sets and data-driven ecosystems constrain who benefits from public-interest data-driven decisions. Just public data value creation emphasises a human-centred approach to funding, collecting, using, and sharing quality data for positive impact, and data innovations that capture the multidimensional aspects of data as a digital public good (DPG), protect data subjects' privacy, support the social contract for data, and mitigate the existing multidimensional inequities that arise and are exacerbated due to the datafication of socioeconomic and democratic activity.

In addition to other inputs, my main contribution to the African Union Commission's Data Policy Framework was to highlight the importance of the enablers and governance needed to create public value from data. However, there needs to be more collaboration and co-creation between policy makers and data practitioners to promote data governance frameworks that capture the components of the data value chain and how infrastructure, laws and regulations, policies, technical standards, and institutions impact just public data value creation at various levels of public policy governance. There also needs to be more support and funding for interdisciplinary, localised research in African public policy and knowledge ecosystems that highlights the coordination amongst key stakeholder groups (i.e. private sector, civil society, academia and public authorities), the political economy implications, and the institutional structures necessary to promote just public data value creation that considers African realities at local, national, regional, and global levels. Rigorous local research is crucial to encourage investments in critical infrastructure, understand the impact of datafication on socioeconomic activity, mitigate injustices that may be amplified by data, enable productive and socially valuable data innovations and uses, and ultimately capture the economically valuable characteristics of data as a factor of production to unleash wider social benefits and a fair data future.

Figure 1: Framework to facilitate greater regulatory and policy coherence in a complex, dynamic data ecosystem. Source: Author's own.

AFRICA MUST REGULATE AI TO REAP ITS FULL BENEFITS


Artificial intelligence (AI) regulation is needed to help Africa improve healthcare and eradicate many of its socioeconomic challenges.

This is according to experts participating in a panel discussion, titled: “Navigating the future of AI: How to respond to the impact of ChatGPT and generative AI across industries”, during the Africa Tech Week Conference, in Cape Town, last week.

The discussion focused on how new technologies, such as ChatGPT in the AI space, can lead to economic upliftment, through innovation and creativity in Africa.

They emphasised the important role of regulation in the application of the tech and who is developing it.

Launched by OpenAI in November 2022, the text-based ChatGPT has the ability to interact in conversational dialogue form and provide responses that can appear human. It can also draft prose, poetry or even computer code on command.

Ayanda Ngcebetsha, data and AI director at Microsoft, told the audience that Africans need to recognise AI is here to enhance and help them generate new content.

The regulation of AI is going to be possible if human augmentation is used, to make sure people work with the machine to create new models, she noted.

“Think of the generative AI model in the hands of a clinical professional. Giving the machine problem-solving skills will help accelerate the provision of clinical expertise for a patient. This is one of many points


...continues on page 9


State of AI in Africa 2022: Analysis of the 4IR in Africa – A Foundation for Growth. An industry analysis report commissioned by the AI Media Group.

INFOREG EXAMINES REGULATION OF CHATGPT & AI IN SA


South Africa's Information Regulator is holding internal discussions on how to approach the regulation of viral chatbot ChatGPT and other artificial intelligence (AI) technologies, to ensure they don't violate data privacy laws.

This is according to advocate Pansy Tlakula, chairperson of the Information Regulator, speaking to ITWeb following a media briefing at its offices in Tshwane yesterday.

Tlakula highlighted the importance of developing a framework that will govern emerging Web 3.0 technologies, such as Microsoft-backed OpenAI's ChatGPT, which has set social media abuzz with discussions around the opportunities and dangers of this innovation.

While such emerging technologies are expected to unlock infinite business opportunities across sectors, Tlakula believes gaining an in-depth understanding of the data privacy implications is an important first step her office should take prior to introducing guidelines, or considering developing a framework for this uncharted territory.

Last week, Italy became the first country to ban ChatGPT, saying the chatbot unlawfully collects personal data – breaching the country’s data privacy rules.

There are growing concerns over the potential risks of ChatGPT and other AI-based technologies, relating to infringement of user rights, copyright protection and manipulation, as organisations across the globe race to roll out AI systems.

“We are aware of these new technologies and we have been having discussions about them. We believe that before we go all out regulating technologies, such as ChatGPT and artificial intelligence, we need to inform ourselves technically about these issues to gain enough understanding on the approach to take in introducing regulations.

“This is a very important step because I believe going forward, data privacy will be mainly violated through these emerging technologies. With many things happening out here, I am even fearful and wonder if regulation would be able to appropriately deal with the complex risks posed by these technologies,” Tlakula told ITWeb.

Advocate Pansy Tlakula, chairperson of the Information Regulator

The Information Regulator is mandated to ensure organisations put in place measures to protect the data privacy of South Africans under the Protection of Personal Information Act (POPIA).

The office has been conducting high-profile investigations relating to Promotion of Access to Information Act and POPIA complaints received over the past year.

According to Tlakula, the regulator received 895 complaints relating to alleged violation of POPIA during the 2022/2023 financial year. Of these, 616 (68.8%) have been resolved.

When asked to provide a timeframe on how soon the regulator will start on the process of assessing and/or drafting regulatory guidelines for emerging technologies, Tlakula noted: “I’m not in a position to give you a timeframe because we also have so many things on our plate at the moment, which we are prioritising that affect South Africans right now.

“We are currently setting our sights on the direct marketing industry and surveillance technologies, which are some of the areas we are prioritising regarding data privacy.”

The sale of personal information is another data privacy contravention the regulator has observed of late. This, it says, has led to it working closely with the National Credit Regulator on reported cases.

Launched by OpenAI in November 2022, the text-based ChatGPT has the ability to interact in conversational dialogue form and provide responses that can appear human. It can also draft prose, poetry or even computer code on command.

It is built on top of OpenAI’s GPT-3 family of large language models, and is fine-tuned with supervised and reinforcement learning techniques.

On 30 March, the European Consumer Organisation launched an appeal, calling on all authorities to investigate the harm that can be caused by all AI chatbots.

The call came after the European consumer watchdog was notified of a complaint filed with the US Federal Trade Commission by the Centre for Artificial Intelligence and Digital Policy, about the potential cyber security risks and privacy practices of ChatGPT and similar technologies.

Germany is among the European countries anticipated to follow in Italy's footsteps by banning ChatGPT.

The European Commission is currently applying its mind to the introduction of the world's first legislation on artificial intelligence, called the AI Act.

Last week, OpenAI co-founder Elon Musk and Apple co-founder Steve Wozniak were among hundreds of IT experts who called for these types of AI systems to be suspended, amid growing questions about how to keep them within the confines of human rights.

Jon Tullett, associate research director at IDC, says the greatest concern around ChatGPT and similar technologies stems from users sharing sensitive information with the service, in their prompts, and that information in turn potentially leaking, as happened with ChatGPT.

“More generally, it's about the data-handling governance around user-submitted data. South Africa's Protection of Personal Information Act is drafted to provide privacy governance compatible with global standards, notably Europe's GDPR.

“That already covers instances such as this one. There may be future issues which need attention as the impact of AI grows, in areas like copyright or identity theft. It's an area which policymakers around the world are watching closely; there's a lot of debate about the ethics and possible legal implications, so it's likely we'll see policy changing over time,” he comments.

ICT industry pundits have been raising concerns around the key challenges facing South African policy-makers in developing regulations that will govern Web 3.0 technologies, citing the lack of education and understanding of these innovations as among the key hindrances.

...continued from page 6

made by experts to show how this new tech can help Africa reach its full potential,” noted Ngcebetsha.

The experts said regulation of the application of AI is the best route to take if industries are to derive true value from the emerging tech. This would be particularly beneficial in proactive engagements, like enhancing current intellectual property and content laws – making it easier for organisations to adhere to compliance requirements.

While such emerging technologies are expected to unlock infinite business opportunities across sectors, gaining in-depth understanding of the data privacy implications is an important step to the regulation of technologies such as ChatGPT and other AI-based technologies, they added.

The regulation of ChatGPT has been in the spotlight since March, when Italy became the first country to ban it, saying the chatbot unlawfully collects personal data – breaching the country’s data privacy rules.

South Africa’s Information Regulator says it is holding internal discussions on how to approach the regulation of ChatGPT and other AI technologies, to ensure they don’t violate data privacy laws.

Responsible AI

Also speaking during the Africa Tech Week discussion, Lavina Ramkissoon, AI ethics and technology policy expert at the African Union, pointed out there is nothing wrong with a limited level of bias in the AI machine. The idea of perfection is something we will never get to, and we should not want to get to.

“If we as humans are creating AI to be the next upgrade to humanity or be human-like, then it makes no sense to completely eradicate the bias,” said Ramkissoon.

A combination of regulation, digital rights and a consistent framework to ensure consent is present when uploading information on AI applications is necessary, she added.

“In Nigeria, there was an update to the IP law about four years ago, to include emerging tech like AI and blockchain. Those are the kinds of proactive engagements that need to happen in other countries, from a regulatory perspective. From a digital citizen perspective, there is a lot of awareness and education that needs to happen in this space.”

Professor Arthur Mutambara, executive director at the Institute for the Future of Knowledge at the University of Johannesburg, said there should be more diversity in the development of AI.

“Let us experiment with more women, more young people, more black people and more Africans. If we don’t do that, there will be definite bias and discrimination in the products and the application of AI.

“We have a duty as regulators, government and the private sector to make sure we are not just users, but also participants in the construction of these machines,” explained Mutambara.

AI regulation will assist in closing the gaps previously created by the emergence of new tech between Africa and the rest of the world, he added.

The experts discussed the many positives that come with AI, such as helping the ordinary digital citizen navigate the world of tech, solve issues surrounding healthcare, address social issues such as global warming, and create new and better jobs.

Although the benefits of AI are endless, regulations must be in place to ensure there is no bias, discrimination and marginalisation towards African citizens who are not yet in the digital space.

The experts noted that responsible AI and the composition of teams that build the systems are important factors to consider.

Keneilwe Gwabeni, CIO of Telkom consumer and small business, said the South African government has to invest more in the regulation of AI as a whole. “Government, the private sector and citizens must work together to make AI work for us. We have to make sure we are active players, and collaborate and drive African stories and experiences into the technology that we build.”


GENERATIVE AI & THE POPI ACT


As businesses increasingly adopt AI technologies, privacy and data protection implications have become more pronounced. For example, in South Africa, POPIA has established strict guidelines for collecting, storing, and using personal information.

As a result, employers must know how their employees’ use of generative AI may put them in breach of POPIA.

In this post, we’ll explore how your employees’ use of generative AI technology can put you in breach of POPIA and what you can do to mitigate those risks.

How employees are using generative AI

Employees can use generative AI, like OpenAI’s ChatGPT, to generate content such as emails, reports, and social media posts.

However, this technology requires access to data, including personal information, to train the algorithms that power it and produce the outcomes the employee seeks—e.g., a quick email.

POPIA and personal information protection

Employees may inadvertently put you in breach of POPIA if they input personal information into generative AI without the necessary consent or authorisation.

Risks associated with employees’ use of generative AI

You are responsible for ensuring that your employees know the risks of generative AI and the importance of complying with POPIA. Your responsibilities include providing training on properly using and protecting personal information and implementing policies and procedures that ensure compliance with POPIA. You should also conduct regular systems audits to ensure that employees use generative AI appropriately and are not putting the company at risk.

Employer responsibility and liability

It’s also crucial to note that under POPIA, employers are ultimately responsible for protecting personal information, even if an employee causes the breach.

Therefore, by implication, you may be liable for any damages resulting from a breach of personal information caused by an employee’s use of generative AI.

Proactive measures for compliance with POPIA

To avoid breaches of POPIA, you should take measures such as:

1. Implementing strict policies and procedures for the use of generative AI and personal information

2. Providing comprehensive training to employees on POPIA compliance and the use of generative AI

3. Conducting regular audits of the systems and data to ensure compliance

4. Establishing clear lines of responsibility for the protection of personal information

5. Ensuring that employees have the necessary authorisation or consent before using personal information with generative AI (a simple illustrative check is sketched below)
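To make the last point concrete, here is a minimal, purely illustrative sketch of a pre-submission check that flags obvious personal information (email addresses, SA-ID-like numbers, phone numbers) in a prompt before an employee pastes it into a generative AI tool. The patterns and helper names are assumptions for illustration and are in no way a complete POPIA compliance control or legal advice.

```python
# Minimal sketch, for illustration only: flag obvious personal information in a
# prompt before it is sent to a generative AI service. The regex patterns are
# illustrative assumptions and nowhere near a complete POPIA compliance control.
import re

PII_PATTERNS = {
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "possible SA ID number (13 digits)": re.compile(r"\b\d{13}\b"),
    "phone number": re.compile(r"(?:\+27|0)\d{9}\b"),
}

def check_prompt(prompt: str) -> list[str]:
    """Return the types of personal information detected in the prompt."""
    return [label for label, pattern in PII_PATTERNS.items() if pattern.search(prompt)]

# Example: an employee drafting an email with a client's details in the prompt.
prompt = "Draft an email to thabo@example.co.za about ID number 8001015009087."
findings = check_prompt(prompt)
if findings:
    print("Do not submit - prompt appears to contain:", ", ".join(findings))
else:
    print("No obvious personal information detected.")
```

A real control would combine checks like this with the training, policies and audits listed above.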

Actions you can take next

• Protect your reputation by asking us to train your employees on using generative AI lawfully

• Set clear standards and guidelines for using generative AI by asking us to draft an AI policy for your organisation

• Manage the data protection risks of your AI projects by joining our data protection programme


SOUTH AFRICA FACES MANY CHALLENGES IN REGULATING THE USE OF ARTIFICIAL INTELLIGENCE

There are currently no laws in South Africa specifically regulating artificial intelligence (AI). While the country may choose to use foreign legislation as the basis for drafting its own AI legislation, it will have to be adapted to meet local challenges.

Artificial intelligence (AI) has seen rapid growth in recent years. The release of ChatGPT in November 2022 and several other AI developments have created a frenzy where individuals and businesses are seeking to deploy and leverage AI in their everyday lives. However, the rate at which AI is being developed far exceeds the creation of AI regulations.

The rapid development and deployment of AI without regulation is cause for concern for many, including well-known technology experts such as Elon Musk and Steve Wozniak, who are among a long list of industry leaders who signed a letter on 22 March 2023 calling for a halt on AI research and development.

The purpose behind the letter was to institute a freeze on AI development for six months to allow for alignment on how to properly regulate AI tools before they become even more powerful and intelligent than they already are, and for purposes of providing legal tools and guidelines to mitigate the obvious risks associated with AI.

Many countries have already started establishing draft acts and legislation to regulate AI.

The European Union has taken a risk-based approach under the European Union AI Act and plans to classify AI tools into one of the identified risk categories, each of which prescribes certain development and use requirements based on the allocated risk.

On 29 March 2023, the United Kingdom’s Department for Science, Innovation, and Technology published a white paper on AI regulation. The UK White Paper sets out five principles to guide the growth, development, and use of AI across sectors, namely:

• Principle 1: Safety, security, and robustness. This principle requires potential risks to be robustly and securely identified and managed;

• Principle 2: Appropriate transparency and explainability. Transparency requires that channels be created for communication and dissemination of information on the AI tool. The concept of explainability, as referred to in the UK White Paper, requires that people, to the extent possible, should have access to and be able to interpret the AI tool's decision-making process;

• Principle 3: AI tools should treat their users fairly and not discriminate or lead to unfair outcomes;

• Principle 4: Accountability and governance. Measures must be deployed to ensure oversight over the AI tool and steps must be taken to ensure compliance with the principles set out in the UK White Paper; and

• Principle 5: Contestability and redress. Users of an AI tool should be entitled to contest and seek redress against adverse decisions made by AI.

In South Africa, there are currently no laws regulating AI specifically. South Africa may choose to use foreign legislation as the basis for drafting its own AI legislation, but it is difficult to say at this early stage in the regulatory process.

Inasmuch as it may be beneficial for South Africa to base its AI regulatory framework on existing principles and legislation formulated by other countries, we suspect that South Africa will face the following challenges in respect of establishing AI regulations:

• Data privacy: AI tools process vast amounts of data and information, and the extent to which personal information (if any) is processed remains unknown. The unregulated use of AI tools could result in the personal information of data subjects being processed without their knowledge or consent, and lead to a situation where an organisation is in breach of its obligations under the Protection of Personal Information Act (Popia) if its employees are not trained on the acceptable use of AI tools;

...continues on page 15

Advertisement: Wonderful is unleashing the power of your data. Intel® Xeon® Scalable processors deliver industry leading, workload optimized performance through built-in AI acceleration, providing a seamless foundation to help speed data's transformative impact, from the multi-cloud to the intelligent edge and back. Built-in AI acceleration is how wonderful gets done. Learn more at Intel.com/Xeon.

GLOBAL PERSPECTIVE

The era of Autonomous AI Agents

Artificial General Intelligence (AGI) is a form of hypothetical intelligence. This sort of intelligence is capable of learning to complete any intellectual work that human beings or animals can perform. Although this does not yet exist, autonomous AI agents are often referred to as "primitive AGI". The backing for this claim is that they possess the ability to reason, plan, think, remember, and learn on their own. This level of autonomy demonstrates the untapped power and flexibility of large language models (LLMs) like GPT-4 when wrapped in the right framework.


These task-driven, self-guided agents can autonomously explore various topics, find requirements, create and complete tasks, reprioritise their to-do lists, and loop until they achieve their objectives. The result is a solution that has only one requirement: a goal. This means the user does not need to know the technicalities to be successful, thereby revolutionising the way problem-solving is performed and tasks are handled.
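To make that loop concrete, here is a minimal sketch of the pattern: an LLM is repeatedly asked to complete the next task on a to-do list and then to propose and reprioritise follow-up tasks until the list is exhausted or an iteration cap is hit. It uses the OpenAI chat completions endpoint as exposed by the 0.x Python client available at the time of writing; the objective, prompts, model name and iteration cap are illustrative assumptions rather than details taken from Auto-GPT, AgentGPT or BabyAGI.

```python
# Illustrative sketch only: a stripped-down task-driven agent loop in the spirit
# of Auto-GPT / BabyAGI. Prompts, model name and stopping rule are assumptions.
import os
from collections import deque

import openai  # pip install openai (the 0.x client available in 2023)

openai.api_key = os.environ["OPENAI_API_KEY"]
OBJECTIVE = "Research AI regulation initiatives in Africa and summarise them"


def ask_llm(prompt: str) -> str:
    """Send a single prompt to the chat model and return its text reply."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # assumed model; any chat-capable model would do
        messages=[{"role": "user", "content": prompt}],
    )
    return response["choices"][0]["message"]["content"]


# The agent keeps a to-do list, executes the next task, then asks the LLM to
# propose and reprioritise follow-up tasks until the list runs out.
tasks = deque(["Break the objective into three concrete research tasks"])
results = []

for _ in range(5):  # hard iteration cap instead of a learned stopping rule
    if not tasks:
        break
    task = tasks.popleft()
    result = ask_llm(f"Objective: {OBJECTIVE}\nComplete this task: {task}")
    results.append((task, result))

    new_tasks = ask_llm(
        f"Objective: {OBJECTIVE}\nLast result: {result}\n"
        "List the remaining tasks, one per line, most important first."
    )
    tasks = deque(
        line.strip("-• ").strip() for line in new_tasks.splitlines() if line.strip()
    )

for task, result in results:
    print(f"# {task}\n{result}\n")
```

Real agent frameworks add long-term memory, tool use and safety rails on top of this skeleton, but the control flow is essentially the same.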

Open-source autonomous agent projects like Auto-GPT, Microsoft's Jarvis, AgentGPT, and BabyAGI have been making waves in AI communities, showcasing the true potential of this approach.

AI Agents’ skillset includes:

• Manage digital tasks: Autonomous agents can manage social media, make market investments, and even come up with original material.

• Usage of large language models (LLMs): Autonomous agents can analyse, summarise, and produce opinions or answers to challenging problems by making use of potent LLMs like GPT-4.

• The ability to browse the internet: These agents are capable of actively searching the web for pertinent information while keeping up with the most recent statistics and trends.

• Memory management: Autonomous agents can effectively recall and use information from earlier tasks thanks to their capacity for both short-term and long-term memory.

• Manage your PC: Autonomous agents can access and manage files on your computer with the proper permissions, streamlining processes and boosting productivity.

• Adapt & evolve: These agents can change over time to better meet changing requirements through continual learning & feedback loops.

• Autonomous agents like Auto-GPT will lead to cost savings and increased productivity through automation.


What can Auto-GPT do for you?

The possibilities are endless and thrilling with Auto-GPT. You can easily automate processes, improve workflows, and optimize your projects with the aid of this powerful AI tool. Auto-GPT is the go-to autonomous AI agent for creating content, analysing data, or devising creative solutions. Here are some innovative applications for Auto-GPT:

• Content Creation: Task Auto-GPT to do product research, write captivating articles, make relevant social media posts, or create marketing material that appeals to your target market. It will also keep track of any other interesting accounts it finds, saving you time and trouble.

• Idea Generation: Use Auto-GPT's creativity to come up with fresh concepts for new products, services, or marketing tactics.

• Task Management: Give Auto-GPT mundane jobs like email filtering, scheduling appointments, or document management to free up your time for more important duties.

• Develop Simple Websites: Simplify the web development process to make it quicker and more effective.

• Data analysis: By sorting through enormous datasets, seeing patterns and trends, and producing insightful reports, Auto-GPT enables you to make data-driven decisions.

• Business Optimization: Increase the effectiveness of your business operations by letting Auto-GPT analyse workflows, spot bottlenecks, and suggest fixes.

• Market research: Keep up with the latest market developments, business news, and consumer trends by using Auto-GPT's web-browsing features.

• Code Debugging: To save time and ensure a smoother development process, let Auto-GPT evaluate your code, find mistakes, write tests, and recommend improvements.

• Plugins and Integration: ElevenLabs integration allows the AI to talk using an AI-generated voice. Additionally, through generative integration with other systems (for example, Stable Diffusion), output in various media types, such as images, is possible.

Auto-GPT alternatives

The following AI projects may be a better fit for your needs, and some can even be accessed directly from your web browser. All that is required is your OpenAI (ChatGPT) API key.

About the author (LinkedIn: deonvanzyl)

Deon is a sophisticated technical IT professional with a solid history of effectively bridging the gap between Programming, Security, Digital Forensics, Artificial Intelligence, and Teaching. His track record of over 24 years has a footprint which spans major corporations, academic institutions, and government.

Microsoft Jarvis

Jarvis not only matches Auto-GPT but also creates and understands images, including the scenarios they depict.

AgentGPT

AgentGPT will let you deploy autonomous AI agents and have them set out to solve any goal imaginable. Try your hand at it directly from your web browser.

BabyAGI

The BabyAGI platform (built using Python, OpenAI, and Pinecone APIs) is inspired by the cognitive development of human infants and designed to test how well AI agents can learn and perform complex tasks in a limited and simplified environment.
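As a companion to the loop sketched earlier, the snippet below illustrates the embedding-based memory pattern that agents such as BabyAGI use to recall earlier results: each completed task is stored as a vector, and the most similar stored results are retrieved for the next task. A plain in-memory list stands in for a managed vector database like Pinecone, and the embedding model name and helper functions are assumptions for illustration only.

```python
# Illustrative sketch only: embedding-based "long-term memory" for an agent,
# with an in-memory list standing in for a vector database such as Pinecone.
import os

import numpy as np
import openai  # 0.x client, as available in 2023

openai.api_key = os.environ["OPENAI_API_KEY"]
memory: list[tuple[str, np.ndarray]] = []  # (text, embedding) pairs


def embed(text: str) -> np.ndarray:
    """Turn a piece of text into a vector using an assumed OpenAI embedding model."""
    resp = openai.Embedding.create(model="text-embedding-ada-002", input=text)
    return np.array(resp["data"][0]["embedding"])


def remember(text: str) -> None:
    """Store a completed task result so later tasks can build on it."""
    memory.append((text, embed(text)))


def recall(query: str, k: int = 3) -> list[str]:
    """Return the k stored results most similar (by cosine similarity) to the query."""
    q = embed(query)
    scored = [
        (float(q @ v / (np.linalg.norm(q) * np.linalg.norm(v))), text)
        for text, v in memory
    ]
    return [text for _, text in sorted(scored, reverse=True)[:k]]


remember("POPIA requires consent before personal information is processed.")
remember("Italy temporarily banned ChatGPT over data privacy concerns.")
print(recall("What are the data protection risks of chatbots?"))
```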

Businesses can reduce labour costs and increase revenue by automating tasks. These technologies can eliminate bad data points and reduce decision-making errors, ultimately leading to efficiency gains. The cost reduction and efficiency can result in huge administrative cost savings.

...continued from page 12

• Cyberattacks: AI tools are susceptible to cyberattacks and there is an immediate need for the enforcement of appropriate regulations to ensure that adequate security measures are imposed on the use of AI tools. Italy recently experienced a data breach on ChatGPT and subsequently imposed a temporary ban on the use of ChatGPT in Italy, as well as a ban on the processing of personal information by OpenAI. This is an example of the ramifications of deploying AI without having an adequate regulatory framework in place;

• Inequality and unemployment: South Africans are particularly concerned about AI tools automating jobs that would otherwise create job opportunities in the country, thus increasing the record-high unemployment and poverty rates currently being experienced in South Africa. Our legislation will need to weigh up the advantages of the use of AI tools in the context of and against South Africa's existing challenges and determine ways in which we can use AI tools to improve our current situation. Furthermore, the issue of data bias can lead to decisions that are not equitable and serve to perpetuate existing social injustices;

• Lack of understanding and awareness of AI: AI is technical, and the most common issue among rule-makers is the lack of understanding of how AI tools operate, and therefore how to safely and effectively regulate the use of such AI tools. Our rule-makers will need to consult and collaborate with technology experts to ensure that all risks are identified and addressed under South Africa's AI laws and regulations;

• Inappropriate use: AI tools could be deployed for criminal purposes, such as money laundering, fraud and corruption, or otherwise used to promote terrorist activities. Any AI laws and regulations that are established for South Africa will need to align with the existing legislation that currently regulates such criminal behaviour, to avoid further risks and a rise in criminal activity; and

• Accountability and recourse: South Africa's AI laws and regulations will need to be clear in respect of accountability, and provide guidelines to assist in determining who would be held accountable for adverse decisions generated by AI tools, as well as the escalation procedure for appealing or contesting an adverse AI decision.

The future of AI regulation in South Africa is unclear at this stage; however, AI tools, just like any new technological development, present real risks that should be mitigated through laws and regulations. For now, users of AI tools should be aware of the associated risks and take steps to protect themselves against those risks.

Deon van Zyl (Norway), BCom (Hons), Senior System Developer | LinkedIn: deonvanzyl

THE HARD POWER ECONOMICS OF AI FOR SOUTH AFRICA

OpenAI recently introduced the latest in its GPT series of large language models (LLMs) to widespread interest. GPT stands for Generative Pre-Trained Transformer. Generative describes a class of statistical models that can generate new data instances, from a single pixel to a full image, or a single word to a complete paragraph. Pre-Trained refers to the model parameters (the model's weights and biases) being fine-tuned following many training runs on the training data. Transformer refers to the model architecture first described in a 2017 paper from Google.

Transformers use an attention mechanism to draw global dependencies between input and output, such as tracking relationships in sequential data like the words in a sentence. Nowadays people use transformers every time they search on Google or Bing.
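For readers who want to see the mechanism rather than just the name, below is a minimal NumPy sketch of the scaled dot-product attention described in that 2017 paper; the toy shapes and random inputs are illustrative assumptions, not a production model.

```python
# Minimal sketch of scaled dot-product attention, the core operation of the
# transformer architecture. Toy sizes and random inputs are for illustration.
import numpy as np


def scaled_dot_product_attention(Q, K, V):
    """Each output row is a weighted blend of the rows of V, with weights set by
    how strongly the corresponding query matches every key."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                   # pairwise query-key similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax over the keys
    return weights @ V                                # mix the value vectors


# Toy self-attention: a "sentence" of 4 tokens, each embedded in 8 dimensions.
rng = np.random.default_rng(0)
tokens = rng.normal(size=(4, 8))
output = scaled_dot_product_attention(tokens, tokens, tokens)
print(output.shape)  # (4, 8): one context-aware vector per token
```

This is the operation that lets each word in a sentence draw on every other word when the model builds its representation.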

In 2021 Stanford researchers termed transformers 'foundation models' – a description of models trained on broad data at scale that can be fine-tuned to a wide range of downstream tasks. OpenAI's GPT-3, released in June 2020, had 175 billion parameters, was trained on 570GB of filtered text data, and used 10,000 GPUs (graphics processing units, the chips used for AI) for its training. In October 2021, Microsoft and NVIDIA announced the Megatron-Turing Natural Language Generation model with 530 billion parameters, at the time the world's largest and most powerful generative language model, trained on a combination of 15 datasets in a training process that required thousands of GPUs running for weeks.

While the GPT series of LLMs is what most people may be familiar with, AI has had a significant impact on science, where it has gone from modelling language to modelling the relationships between atoms in real-world molecules. As large models built on large datasets with large-scale accelerated computing, transformers are used to make accurate predictions and generate new data instances that drive their wider use: a virtuous cycle generating more data that can be used to create ever better models. Scientific fields from drug design to materials discovery are being catalysed by these massive data sets and powerful models. For example, NVIDIA and the University of Florida's academic health center collaborated to create a transformer model named GatorTron to extract insights from large volumes of clinical data to accelerate medical research. Generative models are being used to generate more robust synthetic controls for clinical trials using data from different modalities in cases in which patient recruitment or retention is difficult.

London-based DeepMind developed a transformer called AlphaFold2, which processes amino acid chains like text strings to accurately predict protein structure.

The curation of open-source protein and metagenomics data sets was pivotal in enabling the training of AlphaFold, which in turn enabled new data sets of predicted protein structures for almost all known proteins. The work is now being used to advance drug discovery, with DeepMind establishing a sister company, Isomorphic Labs, to pursue it. Using AlphaFold together with an AI-powered drug discovery platform, researchers led by the University of Toronto Acceleration Consortium were able to design and synthesize a potential drug to treat hepatocellular carcinoma (HCC), the most common type of primary liver cancer. Traditionally, drug discovery has relied on trial-and-error methods of chemistry that, in comparison to AI-driven methods, are slow, expensive and limit the scope of exploration of new medicines. In the case of the potential HCC drug, it took just 30 days from target selection, after synthesizing only seven compounds. These AI-powered drug discovery platforms work in concert with self-driving laboratories, an emerging technology that combines AI, lab automation, and advanced computing.

In South Africa health is an obvious example of a significant economic and societal opportunity where the technology needs to serve as the centrepiece of a new healthcare model, to drive down costs, increase access, and radically improve outcomes. The application of AI in this context will provide predictive, preventative, personalised care and will help to reduce demand. Data is critical to unlocking the benefits of AI, with large, diverse and multimodal data (i.e. radiology, digital pathology) required to transition from the research setting into everyday clinical practice. However, patient, and other important data are stored in silos, across different hospitals, universities, companies, and research centres. This is not an area in which the government can sit on the sidelines. A major reset in thinking is required.

To date, publicly available data has been responsible for advances in AI. For example, Common Crawl enabled progress in language models, and ImageNet drove progress in computer vision. These data sets were relatively cheap to produce – in the region of hundreds of thousands of dollars – but have generated spillover value into the billions of dollars. AI has worked particularly well in pathology (cancer/MRI images), and some experts consider it better than trained doctors. In the UK, Biobank is helping to produce the world's largest organ-imaging data set, based on 60,000 participants, to assess disease progression, and in Europe the European Health Data Space (EHDS) has been established to help create standards, improve interoperability, and allow access to data for research.

For South Africa there is now a once-in-a-generation opportunity to revolutionise health with the implementation of a single national health infrastructure that brings data together into a world-leading system. This would facilitate a single centralised electronic health records (EHR) system. It would ensure a common standard of EHRs across the health system, common data collection, data standards and interoperability, allowing the benefits of a connected data system to be fully realised. A single database would also be able to connect more easily with external systems to share data in trusted research environments, platforms providing data from devices such as wearables, and clinical-trials-management systems.

Operationally, the data management platform behind the EHR would cost less than 2 billion rand and be operational within a year, with rollout across the entire healthcare system taking place inside three years. The spillover would be significant. In 2019, Ernst & Young estimated that unlocking NHS health data could be worth up to £10 billion per annum through operational efficiencies, improved patient outcomes and wider economic benefits. In addition, collaborating with life sciences firms to access national health data, where appropriate, would generate funding for greater investment in public research and development.

AI offers the opportunity to catalyse a radically different future for South Africa: a national purpose that is bold and optimistic, that embraces technology to restore science and research, help citizens live longer, healthier lives and create new industries with meaningful employment. However, achieving this economic transformation requires a generational change in how work and innovation take place. Capturing the opportunity in AI is a marathon, not a sprint. The winners are those who can effectively frame problems as AI problems and combine engineering with building the requisite hardware, software, and data architecture to drive innovation.

The magnitude of the undertaking to transform South Africa from a consumer of AI to a producer cannot be overstated. OpenAI's GPT-4 and its successors are being trained on tens of thousands of the highest-specification GPUs for months on end. There are fewer than 500 such top-specification GPUs on the African continent, meaning that to train a single model, a private lab in California is using at least 50 times the total compute capacity available on the entire African continent. To help countries plan for AI compute capacity, the OECD has published the first blueprint on AI compute. Canada and the United Kingdom have also begun needs assessments for compute infrastructure more broadly, but planning for specialised AI compute needs across the economy remains a major policy gap, with South Africa not having undertaken any such assessment.

A recent report by the UK Government Office for Science noted that many smaller research centres and businesses in the UK were having difficulty gaining access to large-scale computing platforms, which curtailed the scope of their AI development. Likewise, a 2020 study found that an increased need for specialised computational infrastructure and engineering can result in 'haves and have-nots' in a scientific field. "We contend that the rise of deep learning increases the importance of compute and data drastically, which, in turn, heightens the barriers of entry by increasing the costs of knowledge production," the paper reads.

According to the November 2022 Top500 list, there are only 34 countries in the world with a “top supercomputer”. While the list does not capture all the top systems in the world, it does serve as a proxy for the countries with capability and those without. It is primarily the top 50 systems on the list that have real capability from an AI and science standpoint and that are used predominantly for such work. Apart from the leading countries (the US, China, countries from the EU27, the UK, and Japan), the rest of the world makes up 12% of the supercomputers on the list, with countries from the Global South sparsely represented, and South Africa completely absent.

Consider Anton 3, a supercomputer specially designed for atomic-level simulation of molecules relevant to biology

Frontier’s exascale capability was leveraged by researchers at the US Department of Energy’s Oak Ridge National Laboratory to perform a large-scale scan of biomedical literature in order to find potential links among symptoms, diseases, conditions and treatments, uncovering connections between different conditions and potentially pointing to new treatments. The system was able to search more than 7 million data points from 18 million medical publications in only 11.7 minutes. The study identified four sets of paths for further investigation through clinical trials.

Image credit: Oak Ridge National Laboratory

...continues on page 19

17 2ND QUARTER 2023 | SYNAPSE
IN CONVERSATION WITH...

LEADING THE AI AND CV EDUCATION REVOLUTION IN AFRICA BY AUGMENTED STARTUPS

In the dynamic world of technology, the advent of Artificial Intelligence (AI) and Computer Vision (CV) has signaled an era of unprecedented innovation and possibilities. These rapidly growing areas are crucial for solving difficult issues and driving significant changes in various industries – from enabling contactless detection of vital signs in healthcare and determining the optimal time for crop harvest in agriculture to enhancing performance tracking in sports and streamlining quality inspection processes in manufacturing. However, for AI and CV to reach their full potential, we need to cultivate a generation of innovators equipped with the necessary skills and knowledge. This is precisely where Augmented Startups, a South Africa-based AI education provider with global reach, takes center stage.

Augmented Startups boasts a rich educational offering of AI, ChatGPT, Drone Swarms, and CV courses. Their unique approach to learning seeks to bridge the gap between evolving technologies and their practical applications. In this way, Augmented Startups provides a well-rounded learning experience that blends theoretical understanding with practical application.

Augmented Startups strives to keep the curriculum and content up to date, ensuring that students stay abreast of the latest developments in AI and CV. These comprehensive courses are grounded in real-world problem-solving, enabling students to apply what they learn to tackle complex problems using AI. For instance, their recently developed ChatGPT-based AI, designed to mimic Elon Musk, gives students the unique opportunity to learn from AI in a live conversational setting, while also highlighting both the potential and the dangers of such technology. Ritesh Kanjee, the director of Augmented Startups, found the conversation with the AI insightful, and in the process learned about the SpaceX Raptor engine for the first time.

Beyond the classroom, Augmented Startups offers students the opportunity to participate in simulated real-world industrial projects. These immersive experiences, which blend in a measure of theory, are designed to prepare students for future careers in AI, Large Language Models (LLMs), and CV.

One of the most innovative aspects of the Augmented Startups experience is their AI & Drone Building Workshops. These sessions allow students to construct gesture-controlled drones from scratch, expanding their skillset and introducing them to the exciting possibilities of drone technology.

The impact of Augmented Startups extends beyond individual learners. The education provider has established a collaboration with the University of Johannesburg and is actively seeking to expand its reach to other top educational institutions across Africa. Such collaborations not only enhance the curriculum of these institutions with advanced AI and CV courses but also increase their prestige and competitive edge.

The benefits of incorporating AI, LLMs, and CV into the curriculum are manifold. For students, this translates into gaining cutting-edge knowledge, acquiring real-world problem-solving skills, enhancing their employability in high-demand sectors, and fostering an innovative and entrepreneurial mindset. For institutions, it provides an opportunity to stay at the forefront of technological advancement and attract more students interested in AI, CV and beyond.

18 SYNAPSE | 2ND QUARTER 2023
THOUGHT LEADERSHIP

For those interested in exploring the world of AI and CV, Augmented Startups offers a wealth of resources on their website, www.augmentedstartups.com. Here, you can find a range of courses tailored to different learning needs. Augmented Startups can provide bespoke training solutions to empower corporates and students at educational institutions with the skills to navigate the AI-driven world.

To further democratize access to AI education, Augmented Startups has grown a community of over 100,000 subscribers on their YouTube channel, where they regularly post video tutorials. It's not just a space for learning; it's a platform for innovation that also fosters a community that learns, shares, and grows together. Brands looking to reach this engaged and tech-savvy audience can also explore sponsorship opportunities with Augmented Startups.

As we stand on the cusp of a new era powered by AI, it is initiatives like those pursued by Augmented Startups that will play a critical role in shaping the future. By empowering learners with the right tools and knowledge, they are not just educating; they are paving the way for the innovators of tomorrow.

Ritesh predicts that a fusion of Large Language Models (LLMs), Multimodal AI, and robotics will redefine Africa's technological trajectory. He envisions a future where this convergence creates intuitive robotic systems that can interpret diverse data inputs, resulting in groundbreaking applications. These could range from AI-driven robots acting as personal assistants or on-demand advisors, understanding and responding to complex tasks, to advanced systems revolutionizing the entertainment industry with immersive, interactive experiences. This fusion represents a seismic shift that could place Africa at the forefront of a future where technology becomes more accessible, intuitive, and impactful.

Contact Info: Ritesh Kanjee | Director of Augmented Startups +27744769790

rkanjee@augmentedstartups.com

...continued from page 17

(e.g., DNA, proteins, and drug molecules) and used for drug discovery. The system, developed on the sidelines of D. E. Shaw's primary hedge fund business, is not listed in the Top500 but would easily rank in the top 50, and it sits far beyond anything on the African continent. Put differently, without such systems and engineering in South Africa, how are researchers and institutions expected to undertake modern drug discovery? They cannot, and this extends to all fields now driven by AI, from climate to materials.

As Jack Clark, co-founder of Anthropic, recently pointed out, beyond the technological aspects GPT-4 should be viewed as the rendering of hard-power economics in computational form: a capable data transformation engine and knowledge worker whose engineering and parameterisation is controlled and owned by a single private company. It is indicative of how contemporary AI research is very expensive, and of how these operations should be thought of more as capital-intensive, factory-style businesses than as SaaS (Software-as-a-Service) companies. For example, Adept, an AI startup, is training large-scale generative models to take actions on computers. Through a set of English commands, you can imagine Adept taking data from one application and loading it into another, or carrying out multi-step actions in a spreadsheet. Adept raised $350 million in a Series B funding round earlier this year.

AI is unquestionably going to have a bearing on economic life and cause societal changes – an obvious example is the irrevocable change in how education works that OpenAI's ChatGPT has already brought about. South Africa needs a paradigm shift: not a view that tourism and natural resources are the country's economic future, but a collective belief that technology is crucial to building wealth. Reaping the economic rewards from AI will not magically fall into place, nor will the country simply evolve into a technology producer over time. Rather, OpenAI's GPT-4 is an alarm bell that South Africa is falling so far behind that it risks never being able to catch up. Without retooling to capture the economic gains, AI will accentuate the ‘haves and have-nots’. According to the World Bank, at last count the number of poor people in Sub-Saharan Africa had risen, and South Africa's real GDP growth is projected to decelerate sharply to 0.1 percent in 2023, based on the latest estimates by the IMF.

While there are many challenges to confront, we need not be pessimistic about our future. Ahead of us is the opportunity to set a future for South Africa – an opportunity we must seize. The South African people need a new national purpose – one that is bold, optimistic, and embraces technology. By identifying the technological opportunities presented by AI, such as those in healthcare with EHR, and expanding access in an appropriate and safe way, South Africa could establish a competitive edge over other health systems, providing invaluable data sets that could drive the needed progress in life sciences to deliver novel diagnostics and treatments. Ultimately it is that which sits beyond AI – the principles of ambition, invention and compassion that characterise our collective spirit – that we now need to summon to drive lasting impact and a future for the country.

19 2ND QUARTER 2023 | SYNAPSE
The future is AI. Are you ready to embrace it?
IN CONVERSATION WITH...
Gregg Barrett is the CEO of Cirrus, Africa’s AI initiative. He is a member of the OECD.AI Expert Group on AI Compute and Climate.

AI DIGITAL COURSE TARGETS COMMONWEALTH PUBLIC LEADERS

/ Read original article here /


“This course is a new and important milestone achievement, which the Commonwealth has developed for our member countries in close collaboration with Intel.

“This self-paced online course on digital readiness for public sector officials provides a unique opportunity for the public sector workers and leaders in member states to be trained in the fundamentals of artificial intelligence and machine learning.

“The course will lay the foundations for trust in these technologies, and confidence in our capability to use them effectively and responsibly. It will help to advance our work towards a forward-thinking and fast-acting Commonwealth family.

Commonwealth secretary-general Patricia Scotland.

The Commonwealth, in partnership with global chipmaker Intel, on Friday unveiled an online learning platform aimed at helping public sector leaders come to grips with artificial intelligence (AI).

The Commonwealth is an intergovernmental association of 56 member states, including SA, mostly former territories of the British Empire.

Named “Digital readiness for public sector leaders”, the online digital training course aims to demystify AI among senior officials across the Commonwealth and raise awareness of its potential applications in various sectors.

The course covers topics such as digital governance, technology, infrastructure and inclusivity.

It contains use-case examples, international best practices and frameworks that allow participants to develop strategies, scalable solutions and action plans for digital transformation in their communities.

The course is available in over 120 languages and is responsive to diverse abilities, such as visual impairment, dyslexia and ADHD. Once completed, participants will receive certification from Intel.

Professor Luis Franceschi, assistant secretary-general of the Commonwealth, speaking ahead of the launch, said the new tool is expected to do a “great deal of good” across the Commonwealth.

“The possibility for each member state to benefit from the development opportunities presented by new technologies is a key goal of the secretarygeneral,” stated Franceschi.

“Public sector leaders are at the helm of ensuring access to services like hospitals, schools, universities, etc. The public sector is responsible for delivering and it’s only logical that the private sector joins in to help with that delivery.”

Commonwealth secretary-general Patricia Scotland explained the launch follows a mandate set by the heads of government of the 56-member organisation, to address the divides in digital access and skills across the Commonwealth.

“It will also help us to deploy digital skills across every single sector.”

Referencing statistics, Scotland revealed that global AI funding doubled to $66.8 billion in 2021, with 65 AI companies reaching valuations of over a billion dollars.

“As the technological revolution unfolds, AI and machine learning have become indispensable, not only for public sector leaders but in the private sector too.

“Therefore, it’s vital that our Commonwealth member countries have the tools they need to maximise its value, not only to improve governance and economic opportunities, but to build a brighter, smarter future for everyone.

“National digital readiness demands that we build the necessary competencies, fostering knowledge and confidence in capabilities of transformative technology.”

Speaking at the event via videostream, Sarah Kemp, Intel VP and GM for international government affairs, stated: “Digitisation drives benefits for governments and their citizens, including GDP growth, job creation, social inclusion, along with improvement of services, as well as governance, with increased participation, more transparency and efficiency.

“For countries to remain competitive in the global economy, it is important for them to invest in expanding digital readiness for all, and upskilling current and future workforces for an AI-ready world.”

20 SYNAPSE | 2ND QUARTER 2023
TRAINING NEWS
21 2ND QUARTER 2023 | SYNAPSE

SPOTLIGHT ON AFRICA'S MOST PROLIFIC TECH INVESTOR

/ Read original article here / Launch Africa's $31m+ deployed through 133 investments in ~2.5 years makes it the most active dealmaker on the continent

Following our focus on the most active investors in Africa back in February (our most-read article to date), we thought it was worth digging a little deeper into Launch Africa, the most active investor on the continent. Their numbers speak for themselves: since their launch in mid-2020, they have invested over $31m through 133 deals, at a rate of more than a deal a week on average. All but 4 of these deals (97%) were between $100k and $300k, with a median cheque of $250k. Three quarters were in the $200k-$300k range.

Geographically speaking, Launch Africa invested in start-ups spread across 22 markets; in more than half of them (12/22), they were involved in multiple deals. The ‘Big Four’ represent two thirds of the deals and capital invested through Fund 1 (89 deals, $21m); Nigeria and South Africa are neck and neck, followed at a distance by Kenya and Egypt. Five other markets attracted more than $1m from Fund 1: Ghana, Senegal and Côte d’Ivoire in West Africa, Tanzania, and Tunisia. The investment team also went off the beaten track, identifying investments in often-overlooked countries such as Togo, Sudan and Angola.

Unsurprisingly, Fintech is the sector Launch Africa has been investing in the most through Fund 1, with 42 deals (32%) totalling $11m (36%) across 13 markets in total, including 13 fintech transactions in Nigeria alone. However, four other sectors have also seen significant deal activity (>10%, i.e. $3m-$4m and 15-20 deals): Marketplaces, Logistics, Big Data and HealthTech. Deals in these categories were spread across ~10 markets each, except for HealthTech, with deals spanning ‘only’ 5 markets, including 7 deals in South Africa (the second largest country/sector combination after Nigeria/Fintech).

A core feature of the Launch Africa portfolio is therefore its great level of diversification. There is significant depth and breadth in how the portfolio has been constructed, resulting in coverage across multiple geographies, sectors and verticals. The top 4 sectors by invested value make up 70% of the portfolio, with the remaining 30% spread across 11 other sectors. Yet within the top 4 sectors multiple business models are represented along the different value chains, ensuring that no one use case dominates. For instance, there are 6 verticals within the Fintech sector, including credit (both B2C & B2B), remittances, payments, digital banking (both enterprise & consumer) and financial infrastructure (APIs). Geographically as well, the portfolio covers various verticals in each of the portfolio sectors. In Francophone Africa, for example, the ~$4m invested in the region has been allocated to startups in logistics marketplaces, P2P APIs, health insurtech, agri marketplaces and e-commerce. As a result, there is now a very healthy portfolio of companies for later-stage investors to look into for follow-on investments.

What’s next? Launch Africa is currently raising its second seed stage fund in order to service the ever-increasing need for capital in the African start-up space. The competition for this funding will certainly be tough: for the current fund, they received over 2,000 pitches! With this new fund, potential LPs will once again have the opportunity to work alongside one of the continent's most prolific venture capital funds.

22 SYNAPSE | 2ND QUARTER 2023
INVESTMENT INSIGHTS
23 2ND QUARTER 2023 | SYNAPSE

Build and deploy AI projects, with zero code, in 2-4 weeks.
cyborgintell.com | support@cyborgintell.com | @CyborgIntell | CyborgIntell
Our disruptive “iTuring” platform is an innovative zero-code, AI-driven Data Science and Machine Learning software for enterprises. CI’s zero-code, plug-and-play platform is designed to start providing results instantly.

FROM TEXT PREDICTION TO CONSCIOUS MACHINES: COULD GPT MODELS BECOME AGIS?

/ Read original article here /

Welcome to the world of AGIs, Artificial General Intelligence, which refers to AI systems as smart as humans, or greater. The quest for AGI has been a long-standing goal of the AI community, and recent advances in generative models have led to an increasing interest in their potential for achieving AGI.

Image via www.vpnsrus.com

Generative Models

A generative model is a type of machine learning model that is able to generate data samples, based on the large volumes of data that it is trained on (mostly text, but now increasingly images as well). Such models can learn the patterns and structures of language from a large corpus of text and then generate new text that is coherent and follows the same patterns.

The breakthrough here was to use “Transformers” – a type of neural network introduced in 2017. (see Appendix)
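As a minimal illustration (a hedged sketch using the open-source Hugging Face transformers library and the publicly available GPT-2 checkpoint, not any of the models discussed in this article), this is roughly how a small pre-trained Transformer can be prompted to continue a piece of text:

```python
# Minimal sketch: text generation with a small pre-trained Transformer.
# Assumes the Hugging Face `transformers` library and the public GPT-2 model.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Artificial intelligence in Africa could"
outputs = generator(prompt, max_new_tokens=30, num_return_sequences=1)

# The model continues the prompt with text that follows the patterns it learned
# from its training corpus - it does not "look up" a stored answer.
print(outputs[0]["generated_text"])
```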

Now, more interestingly… I asked GPT whether a Generative Model could generate something that was never in its training data (important as we move towards AGIs). Here was its response:

24 SYNAPSE | 2ND QUARTER 2023
TECH INSIGHT
Picture this: a world where AI is not just a “chatbot” you interact with, but an entity that is responsible for decision making, scientific research and even guiding humanity forward.

The answer is that, by identifying patterns and data structure, it may be able to, but will struggle to generate anything vastly different from the training data, and nothing “completely novel”. This is where we are currently.

A mirror into your civilization...

If I had to describe GPT in an exciting way:

 Currently, Generative Models are a mirror of your civilization at a point in time, an automated and efficient record and reflection of everything you have done (once trained on everything)

 Given that humans have to generate science, art and culture over many years, and then train the models on the entirety of this, there is a huge dependency – for these Generative models to have any value, the content on which the model is trained needs to be created first.

 These models may seem “smarter” than a human, but that is because they can access information generated by human culture and civilization instantly, whereas the humans, whose creativity came up with everything in the first place, cannot.....

The Two Planets example

To illustrate further, I have come up with the “two planets” example. In this scenario, imagine a duplicate civilization that has evolved to the same level as the Earth-Human civilization, say on a planet orbiting Proxima Centauri. They would potentially have similar cultural achievements and the same level of technology, although their language, appearance, etc. could be different. At the same time as Earth, they develop Generative models and their version of GPT4.

If we queried the Earth GPT4 model about anything on Proxima Centauri, it would know nothing...

Of course, the reciprocal would apply as well. Even a more advanced model, a GPT 5 or 6, would have the same limitations, as it was not trained on any data from that planet. Would you still consider it “intelligence”?

How useful would GPT be in this scenario? Well, if the aliens came here, they could use the Earth GPT4 to learn all about our planet, culture and achievements, assuming they quickly learned one of our languages that the GPT model is also familiar with. However, what was once being spoken of as an “AGI” may not be considered as such in this example.

What would be truly impressive is if the Earth GPT4 could understand images or pass an IQ test from the hypothetical Proxima Centauri civilization…

Innate Intelligence

It is still amazing to me that GPT4 has an understanding of patterns, relationships in images, and the ability to pass a simple IQ test. Yes, it was trained by its creators to do this, based on data of mankind’s history, but once trained it has this ability.

This brings us to the definition of Intelligence itself, and the concept of innate intelligence.

“Every living being has some level of innate intelligence”

While factors such as education, socioeconomic status, and cultural experiences can impact cognitive development and, in turn, influence IQ test performance in humans, these are not the only factors that influence intelligence. Genetics, neurological factors, and individual differences in learning capability also play a role. Therefore, there is an innate intelligence in every living being that plays a large part in the resulting visible intelligence demonstrated in the real world.

How do we create an artificial intelligence with some level of innate intelligence?

Remember, if a human baby grows up in a culture different from that of his/her parents, learning a different language, the child still manages to learn quickly.

Consider this – if we go back to the Two Planets example, we can be confident that while the Earth GPT4, 5 or 6 will have no knowledge of the culture, language, events etc. of another civilization, it WILL manage to:

 Perform mathematical calculations that are constant in the universe

 Understand basic patterns that are constant in the universe

 Learn the basic structures of language that may be common in the universe

25 2ND QUARTER 2023 | SYNAPSE
TECH INSIGHT Licensed by Creative Commons
From the GPT4 whitepaper

 Thus...have the ability to potentially learn from ANOTHER civilization and their data

We then approach something very exciting… we could argue that in creating these models, which admittedly had to be trained on all our data to start with, we are taking the first steps towards creating something with a small amount of innate intelligence. And each subsequent model would then build on the previous one in terms of capability, until… well, would iteration 7 or 8 be an AGI?

Are they AGIs?

At this point we need to be clear on what our definition of an AGI is, as we are finally moving closer to creating one. I believe that our definition has become muddied.

Does an AGI and the Singularity simply refer to any intelligence smarter than humans?

If we go back to the 1993 definition of the Singularity, from Vernor Vinge's essay “The Coming Technological Singularity”, he spoke of “computers with super-human intelligence”. I could argue that GPT4 is already smarter than any human in terms of recalling knowledge, although it would be less capable in creativity, understanding and emotional intelligence.

He also talks about human civilization evolving to merge with this super intelligence. This hasn't happened, but brain interfaces have already been built. A brain interface into GPT4 that would allow a human to call up all of our civilization's knowledge instantly, turning him/her into a “super human”, is arguably possible with today's technology. It could then be argued that we have already met the criterion for the singularity by the 1993 definition…

The Singularity

If we move to futurist Ray Kurzweil's definition of the Singularity, he spoke of “… when technological progress will accelerate so rapidly that it will lead to profound changes in human civilization…”.

Here, the year 2023 will certainly go down as the start of this change. Due to the emergence of GPT3 and then GPT4, it is a watershed year in technological history, like the launch of the PC or the internet.

Already there are conversations that I can only have with GPT4 and with no one else. The reason is that the humans around me may not be knowledgeable on particular topics, so I turn to GPT4. I sometimes try to argue with it and present my opinion, and it responds with a counter-argument.

By our previous definition of what an AGI could be, it can be argued that we have already achieved it, or are very close, with GPT4. We are certainly on the road to AGI, but we now have to clearly define a roadmap for it. This isn't binary anymore, as in something either is an AGI or it is not. More importantly, the fear-mongering around AGIs will certainly not apply to all levels of AGI, once we clearly define these levels.

This is particularly important, as recently people such as Elon Musk have publicly called for a “pause” in the development of AGIs because they could be dangerous. While the concern is valid, a blanket pause would also rob humanity of the great benefits that AI will bring to society.

Surely if we create a roadmap for AGIs and identify which levels would be dangerous and which would not, we could then proceed with the early levels while exercising more caution on the more advanced ones?

The AGI Roadmap

Below is a potential roadmap for AGIs with clearly defined stages.

 Level 1: Intelligent Machines - Intelligent machines can perform specific tasks at human level or better, such as playing chess or diagnosing diseases. They can quickly access the total corpus of humanity’s scientific and cultural achievements and answer questions. Are we here already?

 Level 2: Adaptive Minds - AGIs that can learn and adapt to new situations, improving their performance over time through experience and feedback. These would be similar to GPT4 but continue learning quickly post training.

 Level 3: Creative Geniuses - AGIs capable of generating original and valuable ideas, whether in science, art, or business. These AGIs build on the scientific and cultural achievements of humans. They start giving us different perspectives on science and the universe.

 Level 4: Empathic Companions - AGIs that can understand and respond to human emotions and needs, becoming trusted companions and helpers in daily life. This is the start of “emotion” in these intelligent models, however by this time they may be more than just models but start replicating the brain in electronic form.

 Level 5: Conscious Thinkers - AGIs that have subjective experiences, a sense of self, and the ability to reason about their own thoughts and feelings. This is where AGIs could get really unpredictable and potentially dangerous.

 Level 6: Universal Minds - AGIs that vastly surpass human intelligence in every aspect, with capabilities that we cannot fully define yet with our limited knowledge. These AGIs are what I imagined years ago: AGIs that could improve on our civilization's limitations and derive the most efficient and advanced designs for just about anything from the base principles of physics (i.e., operating at the highest level of knowledge in the universe).

As you can see, levels 1-3 may not pose much of a physical threat to humanity while offering numerous benefits to society, so we could make an argument for continuing to develop this capability.

Levels 4-6 could pose a significant threat to humanity. It is my view that any work on a level 4-6 AGI should be performed on a space station or Moon base, to limit potential destruction on Earth. It is debatable whether human civilization would be able to create a Level 6 AGI, even after 1,000 years…

Universal Minds

Over the last few decades, I have been fascinated with the concept of an Advanced AGI, one that is more advanced than humans and that could thus rapidly expand our technological capability if we utilize it properly.

Here is an old post on my blog from 2007 where I was speculating on the Singularity being near, after following people like Kurzweil.

Copyright: My Blog...

26 SYNAPSE | 2ND QUARTER 2023
TECH INSIGHT

What I always imagined was a “super intelligence” that understood the universe from base principles much better than we did, even if it was something we created. Imagine an intelligence that, once it gets to a certain point, facilitates its own growth exponentially. It would perform its own research and learning.

It would be logical that such an intelligence plays a role in the research function of humanity going forward.

It could take the knowledge given to it and develop scientific theory far more effectively than humans. Already today, for example, we see a lot of the data used in Reinforcement Learning being simulated and generated by AI itself. And if the total data we have is limiting, we could then ask it to design better data collection tools for us, i.e. better telescopes, spacecraft, and quantum devices.

A simple example that I would use would be the computing infrastructure that we use, on which everything else is built.

Most computers today use what’s known as the “Von Neumann” architecture. The main drawback of this architecture is that data has to be constantly moved between the CPU and memory, which causes latency.

On top of this, we typically use the x86 CPU, then operating systems like Windows or Linux, then applications written in programming languages like C++.

Imagine if we could engineer an optimal, efficient computing architecture from base principles, with orders-of-magnitude improvements at the base architecture level, CPU level, OS level and application software level. This would be a difficult task for humans to undertake today, not just in terms of the actual technical design, but also in building it, collaborating on the next layer above, and adopting the technology.

With a “super-intelligent” Universal AI, it would have the power to generate every layer at once.

It would also give us design schematics for the factories to build the new components, in the quickest and most efficient way.

Now while all this seems like convenient fantasy, I have used the above example to illustrate the great strides that a “Universal Mind” could help humanity make.

Another example would be to ask it to calculate how best and most efficiently to “solve the world hunger problem” or “solve the climate problem”.

When you consider the above, it does seem like the benefits outweigh the problems, although the one problem that has always been brought up is that a Universal Mind may decide to destroy humanity. Hence I say, if we’re going to try to build it, take the necessary safeguards, over and above the Responsible AI safeguards that we have today, and consider building the AGI off-world, say on a space station or moon base, with the ability to cut networking and power if needed.

Summary

Hold on to your seats!…

Some GPT models have already shown an impressive ability to pass IQ tests and learn basic mathematics, hinting at the potential for developing a level of intelligence that goes beyond simple text generation. Now that we have an example of a roadmap for AGIs (above), we can certainly see how GPT marks the start of the early phases of this roadmap, although more technical breakthroughs will be needed to eventually reach the later stages.

So buckle up and get ready – we don’t know where the end of this road is or where it will take us, but with GPT models now mainstream, we know that as of 2023 we are at least on the road itself, travelling forward.

Appendix

Below is a great video that I found explaining what Transformers are:

27 2ND QUARTER 2023 | SYNAPSE TECH INSIGHT
“ Now while all this seems like convenient fantasy, I have used the above example to illustrate the great strides that a “Universal Mind” could help humanity make ”

Invest in Tshwane

South Africa’s capital city, the City of Tshwane, is situated in the province of Gauteng, the economic centre of South Africa. As the seat of government, Tshwane is the country’s administrative hub and houses 134 embassies and 30 international organisations, making it second only to Washington DC in terms of the concentration of diplomatic and foreign missions. It is also home to over 30 Johannesburg Stock Exchange-listed companies as well as various multinational companies.

The city is home to four universities and various research institutes, and its knowledge and information industry is well developed. Tshwane has a high literacy rate, a large concentration of financial and business services, strong support from educational institutions and good communication infrastructure, including broadband capacity.

Why Tshwane?

Tshwane is the knowledge centre of South Africa. The city has a high concentration of academic, medical, social science, technology and scientific institutions, which produce 90% of medical, science and technology research in the country and 60% of the country’s overall research output. The city has a student population of 60,000 and high levels of literacy, giving investors access to a skilled workforce and continuous learning.

Your investment is safe with us: we are governed by investment protection legislation, the Protection of Investment Act 22 of 2015, which gives foreign investors rights and protections similar to those available to South Africans.

We have great investment incentives such as the duty drawback schemes that provide refunds for import duties paid on the materials used in the production of goods that are re-exported.

There are no restrictions on foreign investors acquiring property in the country, nor on foreign investors acquiring companies or businesses in South Africa.

Tshwane has a well-developed infrastructure and road network and is centrally situated on the national road network with direct links to Mozambique, Botswana and Namibia along the east-west N4 route, and with Zimbabwe along the south-north N1 route.

For more information, contact us:
012 358 9999 | www.tshwane.gov.za | www.teda.org.za | www.facebook.com/CityOfTshwane
Block B, 2nd Floor, Tshwane House, 320 Madiba Street, Pretoria 0002 | PO Box 440, Pretoria 0001

SA’S DATA, CLOUD BLUEPRINT IN THE WORKS, SAYS MINISTER

The South African government is in the process of finalising the National Data and Cloud Policy, says communications and digital technologies minister Mondli Gungubele.

Gungubele made the comments during a breakaway session on digital opportunities in SA, at yesterday’s South Africa Investment Conference in Sandton, Johannesburg. According to a statement, the state is looking to the policy to strengthen its capacity to deliver services to its citizens, ensure informed policy development based on data analytics, as well as promote the country’s data sovereignty and the security thereof.

/ Read original article here /

“We are excited about going forward. Google and Amazon Web Services have made huge commitments in terms of data investments and cloud service availability, which is going to help us focus with least cost on the innovation space on development of technologies,” states Gungubele.

The draft policy, which was published on 1 April 2021 for public comment, proposes to develop a state digital infrastructure company and high-performance computing and data processing centre.

It also aims to consolidate excess capacity of publicly-funded data centres and deliver processing, data facilities and cloud computing capacity.

South Africa is one of the continent’s mature cloud markets, with the country leading cloud adoption in the region.

In addition, hyperscalers have increased their investments in the local cloud computing space, establishing their data centre facilities in the country.

Despite the intensified and pervasive cloud adoption, studies show that a lack of skills remains among the main difficulties in this process.

Gungubele indicates government has ambitious targets, especially for the three targeted areas: digital connectivity, digital literacy and digital skills.

“We are targeting young people, women and SMMEs for these areas. We already have a huge number of people that are hungry for learning.

“We are working with universities and international communities to ensure there is intensification of innovations to promote capabilities as far as these industries are concerned. We are, among other things, already rolling out artificial intelligence hubs.”

“ We are targeting young people, women and SMMEs for these areas. We already have a huge number of people that are hungry for learning ”

29 2ND QUARTER 2023 | SYNAPSE
REGIONAL NEWS SA

KENYA: AI GUIDELINES FOR PRACTITIONERS LAUNCHED

/ Read original article here /

The artificial intelligence (AI) community in Kenya has developed and launched guidelines for practitioners in the country. The guidelines contain best practices for use of AI and help practitioners better understand the AI landscape in Kenya.

The guide, titled “Artificial Intelligence Practitioners’ Guide: Kenya,” was developed by a multisectoral team drawn from government, think tanks, academia, local and international non-governmental organizations, journalists, human rights advocates, civil society, the private sector, and legal experts, among others, led by the Global Partnership for Sustainable Development Data and the Fair Forward programme implemented by GIZ Kenya.

The guide covers the building blocks of AI in Kenya, the principles of responsible AI, and the legal landscape for AI in the East African country. It highlights the critical building blocks of AI as AI infrastructure, data, capacity and skill, investments, and financing, and notes that Kenya is among the few countries in Africa with the necessary capacity to support AI adoption and usage.

Responsible AI refers to the practice of ensuring that AI is safe, reliable, and impartial. The guidelines provide recommendations for procuring, designing, building, using, protecting, consuming, and managing AI and other advanced analytics. This ensures that AI practitioners can create efficient AI systems, track and mitigate risks and biases in AI models, and ensure that the development process considers the ethical, legal, and societal implications of AI.

Lastly, it examines the legal landscape for AI in Kenya and lists various laws and guidelines touching on digital technologies and the adoption of AI in the country. For AI practitioners, adherence to these laws is important, as it helps them follow best practices and, moreover, create products that are beneficial to society.

Davis Adieno, Senior Director of Programs at the Global Partnership for Sustainable Development Data, says that AI practitioners should find this guide useful for exploring AI technologies in the region to advance sustainable development and inclusive growth.

30 SYNAPSE | 2ND QUARTER 2023
The guide covers the building blocks of AI in Kenya, the principles of responsible AI, and the legal landscape for AI in Kenya
REGIONAL NEWS KENYA
“ Responsible AI refers to the practice of ensuring that AI is safe, reliable, and impartial. The guidelines provide recommendations for procuring, designing, building, using, protecting, consuming, and managing AI and other advanced analytics ”

NASA-FUNDED SCIENTIST USES EO IMAGERY AND AI TO IMPROVE AGRICULTURE IN UGANDA

/ Read original article here /

Ayooluwa Adetola is a writer and editor at Space in Africa. She loves to share scientific information using the simplest words possible. When she’s not in front of a screen, she can be found with her nose buried in a book.

Catherine Nakalembe, the Africa program director for NASA Harvest, leads efforts to map crop conditions and build early warning systems for weather events by developing tools like maps, dashboards, apps and radio to make satellite insights accessible and useful for local farmers and policy-makers across Eastern and Southern Africa.

The programme is executed with local partners, policymakers and researchers to develop tools best suited to the local farmers and increase agricultural production.

One of NASA Harvest’s projects is Helmets Labeling Crops, a ground data

collection effort underway in Kenya, Mali, Rwanda, Tanzania, and Uganda, which involves taking pictures of fields from cameras mounted on motorcycle helmets or cars. The ground data is then used to analyse satellite data to accurately assess food insecurity and climate change. A related NASA Harvest effort called Street2Sat transforms these images into large datasets of georeferenced labels, with information on location and crop type. This data trains algorithms to recognise specific crops like maize or sugarcane, parse the photos to predict which crops are shown, and then turn that data into crop type maps and other tools for individual farmers or national crop monitoring initiatives.
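As a rough illustration of the kind of pipeline described above (a hedged sketch, not the Street2Sat or NASA Harvest code; all features and labels below are invented placeholders), a crop-type classifier can be fitted once geotagged photos have been paired with crop labels from field notes:

```python
# Illustrative sketch only: pairing geotagged field photos with crop labels
# and fitting a simple classifier. Features and labels are synthetic stand-ins
# for what a real pipeline would extract from helmet- or vehicle-mounted imagery.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Placeholder records: latitude, longitude and three simple colour features per photo.
X = rng.random((300, 5))
y = rng.choice(["maize", "sugarcane", "other"], size=300)  # labels from field notes

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# Predicted crop type for unseen photos; in practice such predictions would be
# aggregated into crop-type maps for farmers and national monitoring initiatives.
print(model.predict(X_test[:5]))
print("held-out accuracy:", model.score(X_test, y_test))
```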

According to Catherine Nakalembe, Assistant Professor at the University of Maryland, more investment is needed to ensure partners across Africa can leverage earth observation, ground data, and artificial intelligence to improve food security, and despite growing interest in satellite imagery as a tool for addressing food security, there isn’t sufficient donor funding to ensure regions like East and southern Africa can benefit.

The NASA Harvest programme is a food security and agriculture project that monitors crops from space and uses a combination of satellite imagery and data from the ground to help farmers and policymakers on the continent make more informed decisions.

31 2ND QUARTER 2023 | SYNAPSE
REGIONAL NEWS UGANDA

ADVANCING YOUR WORLD

Our technologies can help close the inequality gaps in South Africa, including the education system.

In September izwe.ai was launched.

It is an AI platform developed by Telkom, in collaboration with Enlabeler, which transcribes and translates speech into text from English and local languages. izwe.ai aims to deliver local-language transcription and translation that gives all learners equal access to learning material.

This will also have a far-reaching impact on the health and business sectors, allowing for academic and legal transcription; contact centre transcription and analysis; and media production services.

Visit www.izwe.ai for more information.

The use of Artificial Intelligence forms a core part of Telkom’s commitment to technological advancement and digital transformation.

Over the past 2 decades, Rwanda, like many other countries, has witnessed a significant increase in cesarean deliveries. Nearly 16% of the nation’s newborns were delivered by the surgical procedure in 2020, up from about 2% in 2000, according to recent research. The rise has been fueled by improved maternal health services and increased access to affordable care, researchers say, but also greater demand for the procedure from more affluent patients.

Although the use of cesarean delivery reduces the risk of morbidity among mothers and babies, it also poses problems. In particular, the surgical wounds can become infected, leading to illness and even death. That risk is particularly acute in rural areas where medical care can be scarce.

CESAREAN DELIVERIES ARE RISING IN RWANDA –AI COULD REDUCE THE RISKS

Smartphone app helps health workers detect postsurgery infections

“With the increasing number of cesarean deliveries, you’re going to see more complications,” says Bethany Hedt-Gauthier, a biostatistician at Harvard Medical School. “It is important to monitor for those complications in a way that is feasible, acceptable, and safe for the mother.”

Now, Hedt-Gauthier is part of a research project field testing a mobile phone app that uses artificial intelligence (AI) to detect infections, potentially speeding treatment.

“The app is helping us assist the local community without the need to visit a health center,” says Laban Bikorimana, research coordinator at the Rwanda office of Partners in Health (PIH), a Boston-based nonprofit that is testing the app.

Research conducted at a hospital in Kirehe, a rural district in Rwanda’s Eastern province, has highlighted the infection threat. There, a 2019 study in which doctors examined women 10 days after cesarean delivery found about 11% had bacterial infections. By comparison, the infection rate is about 7% in more developed countries. The study, published in the British Journal of Surgery, found that Rwandans can find postoperative care—which includes monitoring infections and changing wound dressings—burdensome, in part because they must make long, costly trips to the hospital.

Community health workers in Kirehe can now use the mobile app, developed by an interdisciplinary team from the Massachusetts Institute of Technology (MIT), Harvard, and PIH, to take a picture of surgical wounds. The software then uses computer-vision techniques and AI to detect signs of infection. Initial studies show that the app, which can be used without an internet connection, is able to diagnose infections with roughly 90% accuracy within 10 days of childbirth. Once a problem is recognized, the health worker provides the appropriate care or advises the patient to visit a doctor.

Before the app, researchers had tested several strategies for addressing infections. They provided health workers with short questionnaires to help them identify problems, for example. But “identifying and monitoring postcesarean wounds is the responsibility of doctors, and teaching the community workers to do exactly the same thing was very challenging,” says Vincent Cubaka, a physician and director of research and training at PIH’s Rwanda office.

The team then explored automating the process, which came with its own challenges. First, researchers needed to collect high-quality images of cesarean wounds to train the underlying algorithm. But variations in phone camera settings, lighting, and other conditions affected image quality. “The problem is you give me a camera and I will take a photo from one particular angle, but another photographer might use a different angle,” Hedt-Gauthier says.

/ Read original article here /

To create consistent images, the researchers deployed software that automatically scaled, color calibrated, cropped, and rotated the photos. “All the images were now exactly the same size, the same magnification, and square,” says MIT engineer Richard Fletcher. “That’s perfect data to use.”
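For readers curious what such a standardisation step can look like in practice, here is a minimal, hedged sketch (not the team's actual software; the exact steps and parameters are assumptions) of scaling, cropping, rotating and roughly normalising the colour of a photo with OpenCV:

```python
# Illustrative sketch only: standardising a wound photo before analysis.
# The real pipeline's calibration is more sophisticated; this shows the idea.
import cv2
import numpy as np

def standardise(path: str, size: int = 512) -> np.ndarray:
    img = cv2.imread(path)                       # load the photo (BGR)
    if img is None:
        raise FileNotFoundError(path)
    h, w = img.shape[:2]
    if h < w:                                    # rotate to a consistent orientation
        img = cv2.rotate(img, cv2.ROTATE_90_CLOCKWISE)
        h, w = img.shape[:2]
    side = min(h, w)                             # centre-crop to a square
    y0, x0 = (h - side) // 2, (w - side) // 2
    img = img[y0:y0 + side, x0:x0 + side]
    img = cv2.resize(img, (size, size))          # same size and magnification
    # crude grey-world colour normalisation as a stand-in for colour calibration
    img = img.astype(np.float32)
    img *= img.mean() / (img.mean(axis=(0, 1), keepdims=True) + 1e-6)
    return np.clip(img, 0, 255).astype(np.uint8)
```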

The researchers are now improving the app so it can be used across more diverse populations, such as in Ghana and parts of South America. “In Rwanda the homogeneity of the skin tones was fairly high,” but the current version doesn’t work well with people with lighter skin, Fletcher says. The team is now experimenting with using a thermal camera, where the brightness of the image is a function of the skin temperature rather than skin color.

To avoid misuse of apps that use AI, Fletcher says doctors and clinicians should be informed about the data that were used to train the software. “Otherwise, I think there is a strong danger of AI models being used where they were not intended to,” Fletcher says. “Then you get bad results.”

Training local health workers to use the app can be a challenge, Bikorimana says. “Some of them had never even touched a smartphone.” Still, he sees promise. “I can see [it] being implemented throughout Rwanda.”

The Dalla Lana School of Public Health at the University of Toronto supported this reporting.

33 2ND QUARTER 2023 | SYNAPSE
A community health worker in the Kirehe district of Rwanda uses an artificial intelligence–powered app to see whether a surgical wound has become infected. MATT HEDT
REGIONAL NEWS RWANDA

CAN AI HELP SOLVE DIPLOMATIC DISPUTE OVER THE GRAND ETHIOPIAN RENAISSANCE DAM?

/ Read original article here /

Ethiopia's hydropower dam on the Blue Nile River has angered downstream neighbors, especially Sudan, where people rely on the river for farming and other livelihoods. To reduce the risk of conflict, a group of scientists has used artificial intelligence, AI, to show how all could benefit. But getting Ethiopia, Sudan, and Egypt to agree on an AI solution could prove challenging, as Henry Wilkins reports from Khartoum, Sudan.

35 2ND QUARTER 2023 | SYNAPSE
REGIONAL NEWS ETHIOPIA

ENTERPRISE SOLUTIONS FOR END TO END TRANSCRIPTION SERVICES

Expert knowledge of contact center, medical & debt collection domains

An AI-powered transcription & translation platform using ML and humans-in-the-loop to deliver highly accurate outputs.

Our teams also specialise in speech analytics, speaker diarization, sentiment labeling and deliver performance dashboards per project.

POWERED BY &

EGYPT LAUNCHES CHARTER FOR THE RESPONSIBLE USE OF AI

/ Read original article here /

The National Council for Artificial Intelligence (AI) has launched a charter for the responsible use of AI. The charter sets out five main principles to be considered when developing and implementing AI: human-centered design, transparency, justice, accountability, and security.

Amr Talaat, Minister of Communications and Information Technology (MCIT) affirmed that Egypt is keen to implement its National Strategy for Artificial Intelligence in order to adapt artificial intelligence technologies to serve Egypt across all fields.

The minister also highlighted that the charter was launched to achieve two main goals. The first is to enable citizens to become acquainted with the governing frameworks for the responsible use of artificial intelligence and for all stakeholders to be aware of ethical considerations related to artificial intelligence and to integrate them into their plans.

The second is to highlight Egypt's readiness to follow responsible artificial intelligence practices in all their aspects, thereby attracting investments in this field.

It also aims to improve Egypt’s ranking in indicators measuring the state’s readiness to invest in AI. The charter will also support developers who work in the AI field and are looking to develop or market their products in Egypt.

The MCIT minister added that this document will be reviewed annually to ensure it is always up to date with developments in the field of AI.

SPENDING ON AI IN MIDDLE EAST AND AFRICA REGION TO SOAR TO $3 BILLION IN 2023

/ Read original article here /

The region is expected to experience the fastest growth in artificial intelligence spending worldwide, report finds

Spending on artificial intelligence in the Middle East and Africa region will jump to $3 billion this year — accounting for nearly 2 per cent of global AI spending of $151.4 billion, according to a new report by International Data Corporation.

The region is expected to experience the fastest growth rate worldwide over the coming years, with the US-based research company predicting that Middle East and Africa AI spending will surge at a compound annual growth rate of 29.7 per cent to reach $6.4 billion in 2026.
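As a quick back-of-envelope check on these figures (a hedged illustration that assumes the growth compounds from the 2023 estimate of roughly $3 billion), the standard compound-growth formula gives:

```latex
% FV = PV (1 + r)^n, with PV = \$3.0bn, r = 0.297 and n = 3 (2023 to 2026)
FV = PV\,(1 + r)^{n} = 3.0 \times (1.297)^{3} \approx 6.5 \ \text{billion dollars}
```

This is broadly consistent with IDC's $6.4 billion projection; the small difference reflects rounding and the exact base year used for the forecast.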

Rapid adoption of cloud solutions by different industries and accelerated digital transformation will drive AI spending over the coming years, the report found.

“Organisations across the region are investing in AI technologies and related software and services to drive greater efficiency through automation and contribute to a more agile

...continues on page 38

37 2ND QUARTER 2023 | SYNAPSE
INVESTMENT NEWS REGIONAL NEWS EGYPT
The global AI market is projected to surpass $1.7 trillion in 2030. Image: REUTERS

...continued from page 37

operating environment,” said Manish Ranjan, senior programme manager for software, cloud and IT services at IDC for the Middle East and Africa.

“The effects of the pandemic have fuelled further spending in relation to AI/ML [machine learning] adoption, particularly within the banking and finance, manufacturing, trade, health care and government verticals,” Mr Ranjan said.

The global AI market is projected to surpass $1.7 trillion in 2030, up from $93.5 billion in 2021, expanding at a compound annual growth rate of more than 38 per cent, data from Grand View Research indicates.

Banking, retail, and federal government will be the Middle East and Africa region's biggest spenders on AI this year, followed by manufacturing, according to IDC.

Together, these four industries will account for nearly 44 per cent of the region's total AI spending this year.

However, IDC expects professional services and transportation to be the fastest-growing industries in terms of AI spending over the 2022-2026 period, with annual growth rates of 36.4 per cent and 33.9 per cent, respectively.

Generative AI, one of the most disruptive offshoots of AI technology, holds immense potential, a recent report by Goldman Sachs found.

It uses machine learning to produce content such as text, images, video and audio, and can generate novel content, in the right context, instead of merely analysing or acting on existing data.

Generative AI could drive a 7 per cent (or almost $7 trillion) increase in the global economy and lift productivity growth by 1.5 percentage points over a 10-year period, Goldman Sachs said.

The global generative AI market is expected to reach $188.62 billion by 2032, growing at an annual rate of more than 36 per cent from $8.65 billion last year, according to a report by Brainy Insights.

The North America region dominated the market last year.

Overall, AI growth prospects in the Middle East and Africa region “look very promising as businesses are increasingly investing in AI- and analytics-based solutions to strengthen and expand their customer experiences, build digital capabilities and drive innovation,” Mr Ranjan said.

Augmented customer service agents, fraud analysis and investigation, improved threat intelligence and prevention systems, and sales process recommendation and expansion, are some of the key business use cases where regional organisations are investing more in the market.

However, the region will need more trained professionals to harness the technology, IDC said.

“Numerous challenges will accompany the region's increasing adoption of AI, with the most critical being the lack of skilled resources such as data scientists, data engineers and AI modellers,” Mr Ranjan said.

“However, the region has multiple initiatives in place aimed at upskilling local talent, with organisations in both the public and private sectors establishing partnerships to foster AI- and ML-specific learning,” he added.

38 SYNAPSE | 2ND QUARTER 2023
Guests entertained by a robot at Mohamed bin Zayed University of Artificial Intelligence in Abu Dhabi. Khushnum Bhandari / The National
Military robots perform at the 16th edition of the International Defence Exhibition and Conference in Abu Dhabi. EPA
INVESTMENT NEWS
DEVELOP WITH NVIDIA OMNIVERSE
Start building custom tools and applications today. Watch this technical session here. Explore developer resources and learn more.

The European Union is considering far-reaching legislation on artificial intelligence (AI).

The proposed Artificial Intelligence Act would classify AI systems by risk and mandate various development and use requirements.

European lawmakers are still debating the details, with many stressing the need to both foster AI innovation and protect the public.

The European Union (EU) is considering a new legal framework that aims to significantly bolster regulations on the development and use of artificial intelligence.

The proposed legislation, the Artificial Intelligence (AI) Act, focuses primarily on strengthening rules around data quality, transparency, human oversight and accountability. It also aims to address ethical questions and implementation challenges in various sectors ranging from healthcare and education to finance and energy.

“[AI] has been around for decades but has reached new capacities fueled by computing power,” Thierry Breton, the EU’s Commissioner for Internal Market, said in a statement. The Artificial Intelligence Act aims to “strengthen Europe's position as a global hub of excellence in AI from the lab to the market, ensure that AI in Europe respects our values and rules, and harness the potential of AI for industrial use.”

The cornerstone of the AI Act is a classification system that determines the level of risk an AI technology could pose to the health and safety or fundamental rights of a person. The framework includes four risk tiers: unacceptable, high, limited and minimal.

AI systems with limited and minimal risk—like spam filters or video games—are allowed to be used with few requirements other than transparency obligations. Systems deemed to pose an unacceptable risk—like government social scoring and real-time biometric identification systems in public spaces—are prohibited with little exception.

“On artificial intelligence, trust is a must, not a nice to have” — Margrethe Vestager, Executive Vice-President for a Europe fit for the Digital Age

High-risk AI systems are permitted, but developers and users must adhere to regulations that require rigorous testing, proper documentation of data quality and an accountability framework that details human oversight. AI systems deemed high risk include autonomous vehicles, medical devices and critical infrastructure machinery, to name a few.
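As a purely illustrative sketch (the structure, example systems and obligation wording below are assumptions drawn from this article, not the Act's legal text), the four risk tiers and the treatment attached to each could be modelled like this:

```python
# Illustrative mapping of the AI Act's four risk tiers, as summarised in this
# article. Examples and wording paraphrase the text above, not legal definitions.
RISK_TIERS = {
    "unacceptable": {
        "examples": ["government social scoring", "real-time biometric ID in public spaces"],
        "treatment": "prohibited, with little exception",
    },
    "high": {
        "examples": ["autonomous vehicles", "medical devices", "critical infrastructure machinery"],
        "treatment": "permitted with rigorous testing, data-quality documentation and human oversight",
    },
    "limited": {
        "examples": ["spam filters"],
        "treatment": "transparency obligations only",
    },
    "minimal": {
        "examples": ["video games"],
        "treatment": "no additional requirements",
    },
}

def treatment_for(tier: str) -> str:
    """Return how the article says systems in a given tier are treated."""
    return RISK_TIERS[tier]["treatment"]

print(treatment_for("high"))
```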

The proposed legislation also outlines regulations around so-called general purpose AI, which are AI systems that can be used for different purposes with varying degrees of risk. Such technologies include, for example, large language model generative AI systems like ChatGPT.

EU's Artificial Intelligence Act: for safely harnessing AI's full potential

“With this Act, the EU is taking the lead in attempting to make AI systems fit for the future we as humans want,” said Kay Firth-Butterfield, the Head of AI at the World Economic Forum.

The Artificial Intelligence Act proposes steep non-compliance penalties. For companies, fines can reach up to €30 million or 6% of total worldwide annual turnover, whichever is higher. Submitting false or misleading documentation to regulators can result in fines, too.
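Expressed as simple arithmetic, and assuming the "whichever is higher" reading of the penalty provision, the ceiling scales with company size. A minimal sketch:

```python
# Illustrative only: the penalty ceiling described above is the higher of a
# fixed amount and a share of worldwide annual turnover.
def fine_ceiling_eur(worldwide_annual_turnover_eur: float) -> float:
    return max(30_000_000, 0.06 * worldwide_annual_turnover_eur)

print(fine_ceiling_eur(400_000_000))    # smaller firm: the EUR 30,000,000 floor applies
print(fine_ceiling_eur(2_000_000_000))  # EUR 2bn turnover: ceiling of EUR 120,000,000
```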

“With these landmark rules, the EU is spearheading the development of new global norms to make sure AI can be trusted,” Margrethe Vestager, the Executive Vice-President for a Europe fit for the Digital Age, added in a statement. “Future-proof and innovation-friendly, our rules will intervene where strictly needed: when the safety and fundamental rights of EU citizens are at stake.”

...continues on page 45


1,100+ notable signatories just signed an open letter asking ALL AI LABS TO IMMEDIATELY PAUSE FOR AT LEAST 6 MONTHS

/ Read original article here /

More than 1,100 people, including Elon Musk, Steve Wozniak, and Tristan Harris of the Center for Humane Technology, have signed an open letter, posted online Tuesday evening, that calls on “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.”

The letter reads:

Contemporary AI systems are now becoming human-competitive at general tasks,[3] and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.

The letter argues that there is a “level of planning and management” that is “not happening,” and that instead, in recent months, unnamed “AI labs” have been “locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.”

The letter’s signatories, some of whom are AI experts, say the pause they are asking for should be “public and verifiable, and include all key actors.” If said pause “cannot be enacted quickly, governments should step in and institute a moratorium,” the letter says.

Certainly, the letter is interesting both because of the people who have signed — which includes some engineers from Meta and Google, Stability AI founder and CEO Emad Mostaque, and people not in tech, including a self-described electrician and an esthetician — and those who have not. No one from OpenAI, the outfit behind the large language model GPT-4, has signed this letter, for example. Nor has anyone from Anthropic, whose team spun out of OpenAI to build a “safer” AI chatbot.

Wednesday, OpenAI CEO Sam Altman spoke with the WSJ, saying OpenAI has not started training GPT-5. Altman also noted that the company has long given priority to safety in development and spent more than six months doing safety tests on GPT-4 before its launch. “In some sense, this is preaching to the choir,” he told the Journal. “We have, I think, been talking about these issues the loudest, with the most intensity, for the longest.”

Indeed, Altman sat down with this editor in January, where he argued that “starting these [product releases] now [makes sense], where the stakes are still relatively low, rather than just put out what the whole industry will have in a few years with no time for society to update.”

Altman more recently sat down with computer scientist and popular podcaster Lex Fridman, and spoke about his relationship with Musk, who was a cofounder of OpenAI but stepped away from the organization in 2018, citing conflicts of interest. (A newer report from the outlet Semafor says Musk left after his offer to run OpenAI was rebuffed by its other cofounders, including Altman, who assumed the role of CEO in early 2019.)

Musk is perhaps the least surprising signatory of this open letter, given that he has been talking about AI safety for many years and has more lately taken aim at OpenAI specifically, suggesting that the company is all talk and no action. Fridman asked Altman about Musk’s recent and routine tweets bashing the organization. Said Altman: “Elon is obviously attacking us some on Twitter right now on a few different vectors, and I have empathy because I believe he is — understandably so — really stressed about AGI safety. I’m sure there are some other motivations going on too, but that’s definitely one of them.”

That said, added Altman, he finds some of Musk’s behavior hurtful. “I definitely grew up with Elon as a hero of mine. You know, despite him being a jerk on Twitter or whatever, I’m happy he exists in the world. But I wish he would do more to look at the hard work we’re doing to get this stuff right.”

We’re still digesting this letter (others are already tearing it to shreds).


Hospitals around the world face huge internal logistics problems as a result of the labor-intensive environment: many small packages need to be delivered on time from various depots to wards where patients need their medication, medical supplies, linen, and meals.

We piloted our delivery module on a custom-configured robot to deliver scheduled drugs in a live hospital environment over a four-week period and found measurable improvements.

find out more @ ctrlrobotics.com/better-hospitals

ctrl easily integrates with multiple robots for dynamic automation needs


EUROPE TAKES AIM AT CHATGPT

/ Read original article here /

A key committee of lawmakers in the European Parliament has approved a first-of-its-kind artificial intelligence regulation, moving it closer to becoming law.

Key Points

• A committee of lawmakers in the European Parliament on Thursday approved the EU’s AI Act, moving it closer to becoming law.
• The regulation takes a risk-based approach to regulating artificial intelligence.
• The AI Act specifies requirements for developers of “foundation models” such as ChatGPT, including provisions to ensure that their training data doesn’t violate copyright law.

The approval marks a landmark development in the race among authorities to get a handle on AI, which is evolving with breakneck speed. The law, known as the European AI Act, is the first law for AI systems in the West. China has already developed draft rules designed to manage how companies develop generative AI products like ChatGPT.

The law takes a risk-based approach to regulating AI, where the obligations for a system are proportionate to the level of risk that it poses.

The rules also specify requirements for providers of so-called “foundation models” such as ChatGPT, which have become a key concern for regulators, given how advanced they’re becoming and fears that even skilled workers will be displaced.

What do the rules say?

The AI Act categorizes applications of AI into four levels of risk: unacceptable risk, high risk, limited risk and minimal or no risk. Unacceptable risk applications are banned by default and cannot be deployed in the bloc.

They include:

• AI systems using subliminal techniques, or manipulative or deceptive techniques to distort behavior
• AI systems exploiting vulnerabilities of individuals or specific groups
• Biometric categorization systems based on sensitive attributes or characteristics
• AI systems used for social scoring or evaluating trustworthiness
• AI systems used for risk assessments predicting criminal or administrative offenses


• AI systems creating or expanding facial recognition databases through untargeted scraping
• AI systems inferring emotions in law enforcement, border management, the workplace, and education
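As a hypothetical illustration only (the category strings below paraphrase this article, not the Act's legal wording), a compliance team might keep the banned categories as a simple screening checklist:

```python
# Hypothetical compliance checklist built from the prohibited categories listed
# above. The category names paraphrase the article, not the Act's legal text.
PROHIBITED_PRACTICES = {
    "subliminal, manipulative or deceptive techniques that distort behaviour",
    "exploiting vulnerabilities of individuals or specific groups",
    "biometric categorisation based on sensitive attributes",
    "social scoring or trustworthiness evaluation",
    "risk assessments predicting criminal or administrative offences",
    "untargeted scraping to build facial recognition databases",
    "emotion inference in law enforcement, borders, the workplace or education",
}

def is_banned(declared_uses: set) -> bool:
    """True if any declared use of a proposed system matches a banned category."""
    return bool(declared_uses & PROHIBITED_PRACTICES)

print(is_banned({"social scoring or trustworthiness evaluation"}))  # True
print(is_banned({"spam filtering"}))                                # False
```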

Several lawmakers had called for making the measures more expansive to ensure they cover ChatGPT.

To that end, requirements have been imposed on “foundation models,” such as large language models and generative AI.

Developers of foundation models will be required to apply safety checks, data governance measures and risk mitigations before making their models public.

They will also be required to ensure that the training data used to inform their systems do not violate copyright law.

“The providers of such AI models would be required to take measures to assess and mitigate risks to fundamental rights, health and safety and the environment, democracy and rule of law,” Ceyhun Pehlivan, counsel at Linklaters and colead of the law firm’s telecommunications, media and technology and IP practice group in Madrid, told CNBC.

“They would also be subject to data governance requirements, such as examining the suitability of the data sources and possible biases.”

It’s important to stress that, while the law has been passed by lawmakers in the European Parliament, it’s a ways away from becoming law.

Why now?

Privately held companies have been left to develop AI technology at breakneck speed, giving rise to systems like Microsoft-backed OpenAI’s ChatGPT and Google’s Bard.

Google on Wednesday announced a slew of new AI updates, including an advanced language model called PaLM 2, which the company says outperforms other leading systems on some tasks.

Novel AI chatbots like ChatGPT have enthralled many technologists and academics with their ability to produce humanlike responses to user prompts powered by large language models trained on massive amounts of data.

But AI technology has been around for years and is integrated into more applications and systems than you might think. It determines what viral videos or food pictures you see on your TikTok or Instagram feed, for example.

The aim of the EU proposals is to provide some rules of the road for AI companies and organizations using AI.

Tech industry reaction

The rules have raised concerns in the tech industry.

The Computer and Communications Industry Association said it was concerned that the scope of the AI Act had been broadened too much and that it may catch forms of AI that are harmless.

“It is worrying to see that broad categories of useful AI applications – which pose very limited risks, or none at all – would now face stringent requirements, or might even be banned in Europe,” Boniface de Champris, policy manager at CCIA Europe, told CNBC via email.

“The European Commission’s original proposal for the AI Act takes a risk-based approach, regulating specific AI systems that pose a clear risk,” de Champris added. “MEPs have now introduced all kinds of amendments that change the very nature of the AI Act, which now assumes that very broad categories of AI are inherently dangerous.”

What experts are saying

Dessi Savova, head of continental Europe for the tech group at law firm Clifford Chance, said that the EU rules would set a “global standard” for AI regulation. However, she added that other jurisdictions including China, the U.S. and U.K. are quickly developing their own responses.

“The long-arm reach of the proposed AI rules inherently means that AI players in all corners of the world need to care,” Savova told CNBC via email.

“The right question is whether the AI Act will set the only standard for AI. China, the U.S., and the U.K. to name a few are defining their own AI policy and regulatory approaches. Undeniably they will all closely watch the AI Act negotiations in tailoring their own approaches.”

Savova added that the latest AI Act draft from Parliament would put into law many of the ethical AI principles organizations have been pushing for.

Sarah Chander, senior policy adviser at European Digital Rights, a Brussels-based digital rights campaign group, said the laws would require foundation models like ChatGPT to “undergo testing, documentation and transparency requirements.”

“Whilst these transparency requirements will not eradicate infrastructural and economic concerns with the development of these vast AI systems, it does require technology companies to disclose the amounts of computing power required to develop them,” Chander told CNBC.

“There are currently several initiatives to regulate generative AI across the globe, such as China and the US,” Pehlivan said.

“However, the EU’s AI Act is likely to play a pivotal role in the development of such legislative initiatives around the world and lead the EU to again become a standardssetter on the international scene, similarly to what happened in relation to the General Data Protection Regulation.”

European Executive VP Margrethe Vestager and European Internal Market Commissioner Thierry Breton give a media conference on the EU approach to AI in 2021.

Image: REUTERS

...continued from page 40

The proposed law also aims to establish a European Artificial Intelligence Board, which would oversee the implementation of the regulation and ensure uniform application across the EU. The body would be tasked with releasing opinions and recommendations on issues that arise as well as providing guidance to national authorities.

“The Board should reflect the various interests of the AI eco-system and be composed of representatives of the Member States,” the proposed legislation reads.

The Artificial Intelligence Act was originally proposed by the European Commission in April 2021. A so-called general approach position on the legislation was adopted by the European Council in late 2022 and the legislation is currently under discussion in the European Parliament.

“Artificial intelligence is of paramount importance for our future,” Ivan Bartoš, the Czech Deputy Prime Minister for Digitalisation, said in a statement following the Council's adoption. “We managed to achieve a delicate balance which will boost innovation and uptake of artificial intelligence technology across Europe.”

Once the European Parliament adopts its own position on the legislation, EU inter-institutional negotiations — a process known as trilogues — will begin to finalise and implement the law. Trilogues can vary significantly in time as lawmakers negotiate sticking points and revise proposals. When dealing with complex pieces of legislation like the Artificial Intelligence Act, EU officials say, trilogues are often lengthy processes.



METAPHYSIC CEO TOM GRAHAM BECOMES FIRST PERSON TO FILE FOR COPYRIGHT REGISTRATION OF AI LIKENESS

/ Read original article here /

Registering Copyright in Your AI Self Could Give You New Rights to Protect and Control Your Digital Identity in the Age of Generative AI and throughout the Internet, Metaverse and Web3

Tom Graham, CEO of generative AI pioneer Metaphysic, has made history today as the first person to submit his AI likeness for copyright registration with the U.S. Copyright Office. As the industry leader in creating hyperreal content powered by generative AI, Metaphysic champions individuals' ownership and control of their AI likenesses and biometric data. By leveraging legal institutions and existing law and regulation, Graham, through this submission, demonstrates the increasingly fine line between reality and computer-generated media as he and Metaphysic seek to create, for the first time, a new bundle of intellectual property rights that must be available to any individual in the future.

"Generative AI can create content that looks and feels real, and regular people's avatars can be inserted into content by third parties without their consent. This is not right, and we should never lose control over our identity, privacy or biometric data," said Thomas Graham, CEO of Metaphysic. "I hope that copyright registration of the photorealistic AI-generated version of myself will increase my ability to take action against unauthorized AI impersonations of myself in the future. Today's law supports that. We all need to work hard to ensure that future laws and regulations strengthen individual's rights and protect vulnerable members of society."

CEO Tom Graham Becomes First Person to File for Copyright Registration of AI Likeness Creating New Digital Property Rights

Producing the AI likeness required Graham to record a three-minute video of himself on a mobile phone to capture his likeness, voice, and biometric data. Once received, Metaphysic utilized its industry-leading hyperreal AI tools to create an AI avatar of present-day Graham. Graham put a lot of effort into creating and curating the training dataset and working with the team at Metaphysic to hone in on the AI look he wanted. Beyond iterating on the look of his AI likeness, Graham and the team also took steps to composite and merge the AI

...continues on page 49


Partner with Africa's leading data science platform to access the power of 45 000 data scientists for your business.

Through our partnership program, organisations can tap into this pool of talent, access machine learning and AI solutions, and stay at the forefront of innovation.

Our quick turnaround means you keep improving with the smartest minds in data science on your team.

GET IN TOUCH: Contact us now for a free consultation. www.zindi.africa | tam@zindi.africa | +44 7522 120077. Reach out now for a 20% discount for all AI Expo delegates.

LAUGH AND LEARN: The Surprising Benefits of Chatbots in African Education

/ Read original article here /

Are you tired of boring old textbooks and monotone teachers? Do you wish you could learn with a side of laughter? Well, you're in luck! Chatbots are here to make education in Africa informative and fun!

Take the example of Sarufi, a chatbot developed in Tanzania to help with customer care issues. However, it's not limited to just customer care - it can also be customized to help with education issues. Imagine having a chatbot that can help you with your homework and make you laugh with witty one-liners. It's like having a study buddy that never gets tired!

And it's not just Sarufi that's making waves in the chatbot space. There's also Kibuti, an offline chatbot that operates through SMS. It's designed to provide education and support to low-income families, who may not have access to the internet or expensive smartphones. With Kibuti, anyone with a basic phone can access valuable educational content and receive personalized support.

These are just a few examples of the exciting possibilities of chatbots in education in Africa. With the right development and implementation, chatbots have the potential to transform the way students learn and teachers teach.
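The article does not describe how Sarufi or Kibuti are actually built, but as a hypothetical sketch of the general idea (keyword-routed SMS replies that pair a short lesson with a light-hearted line), a study bot's reply logic might look something like this. In a real deployment the reply function would sit behind an SMS gateway rather than a print call.

```python
# Hypothetical sketch only: not Kibuti's or Sarufi's actual implementation.
# Shows the general shape of a keyword-driven SMS study bot as described above.
LESSONS = {
    "algebra": "Lesson: to solve 2x + 3 = 11, subtract 3 then divide by 2, so x = 4.",
    "biology": "Lesson: mitochondria are the cell's power stations.",
}
JOKE = "Why did the student eat his homework? The teacher said it was a piece of cake!"

def reply(sms_text: str) -> str:
    """Build a reply for an incoming SMS (illustrative keyword routing)."""
    topic = sms_text.strip().lower()
    lesson = LESSONS.get(topic, "Text 'algebra' or 'biology' to get a mini lesson.")
    return f"{lesson}\n{JOKE}"

print(reply("algebra"))
```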

In fact, some students in Africa have reported feeling less stressed and anxious about exams when using chatbots as study aids. One student from Iyunga secondary school in Mbeya even said, "I used to dread studying, but now I actually look forward to it because Kibuti-Bot makes me laugh offline!"


And it's not just the students who are benefiting from these humorous chatbots. Teachers are finding that their classes are more engaged and motivated when using chatbots as teaching assistants. One teacher from Mbeya University of Science and Technology (MUST) reported,

"My students used to tune out during lectures, but now they're laughing and learning at the same time. It's like magic!"

Of course, there are still challenges to overcome with chatbots in education. One issue is ensuring that the chatbots are culturally sensitive and appropriate for the diverse African population. But with careful development and testing, chatbots can be a valuable tool for improving education in Africa.

So, the next time you're feeling down about school, remember that chatbots are here to make learning a little more lighthearted. Who knows, maybe you'll even crack a smile during your next calculus lesson!

As Kibuti Bot continues to gain popularity, its creators are already working on new and improved versions of the bot, with even more advanced features and capabilities. The next article in this series will dive deep into the origins of Kibuti Bot, exploring how it was created and the challenges its creators faced along the way.

Kibuti Contacts:

kibuti.co.tz, +255 745 051 250

Neema Derick, CEO,

Email: neydmphuru@gmail.com, +255 757 144 062

...continued from page 46

model output into the underlying video to create an accurate representation of a hyperrealistic AI version of himself.

As Metaphysic develops new technologies that shift the future of entertainment and the internet, maintaining data ownership and protecting individuals' rights will be critical to the mass adoption of AI technologies. This initial process of registering copyright in Graham's AI likeness provides a framework for how other individuals and public figures can take steps to protect their identities, performances and brands.

The information provided in this statement, including future comments and commentary concerning the subject matter does not, and is not intended to, constitute legal advice. All information in this statement is for general informational purposes only. It is both Graham and Metaphysic's interpretation of current law in the United States that Graham's AI likeness is a man-made work that qualifies for copyright protection within the meaning of the U.S. Copyright Act of 1976, as amended, and accordingly, registrable with the U.S. Copyright Office. However, it is not clear how laws and regulations will develop in the future or the extent to which registering copyright in an AI likeness will give individuals rights and remedies against third parties that infringe such copyright. Graham and Metaphysic hope that his actions advance the evolving discussion surrounding privacy and individual rights in the context of rapidly advancing generative AI technologies that are becoming increasingly realistic and indistinguishable from reality. Should further analysis or explanation of this subject matter be required, please consult with a qualified attorney for advice pertaining to your specific legal situation.

ABOUT METAPHYSIC

Metaphysic is the industry leader in developing Generative AI technologies and machine learning research to create immersive, photorealistic content at internet scale. Recently named the official generative AI partner for Miramax's forthcoming feature film "Here" and a strategic partner of Creative Artists Agency (CAA), Metaphysic's cutting-edge proprietary technology has positioned the company as the premier partner for the biggest names in Hollywood and content creation. Metaphysic's team of machine learning researchers and generative AI pioneers is focused on building an ethical web3 economy where any person can own and control their biometric data while unlocking the future of creativity. Since 2018, the team behind Metaphysic has been the driving force behind the mass popularization of hyperreal synthetic media via its @DeepTomCruise channel and performances on AGT. Find out more about Metaphysic's technology at www.metaphysic.ai.

Media Contact

Factory PR, Eef Vicca: metaphysic@factorypr.com

Africa’s 4IR Trade & Innovation Magazine SYNAPSE AFRICA EMBRACES THE NEXT WAVE OF AI 1st QUARTER 2023 | ISSUE 19 AI EXPO AFRICA 2023 6TH EDITION LANDS IN JHB 2-3 NOVEMBER GENERATIVE AI - African Inspired Art LELAPA AI launch Vulavula CONVERGENCE PARTNERS launch African 4IR fund UIPATH launch National RPA qualification in SA

SYNAPSE

Africa’s 4IR Trade & Innovation Magazine

* All rates exclude VAT & agency commission. Rates are based on casual advertising. Discounted rates are available for longer ad / editorial runs

READERSHIP / SOCIAL MEDIA REACH

Synapse Magazine is Africa’s first and only business quarterly publication covering developments across the continent in Artificial Intelligence (AI), Data Science, Robotic Process Automation (RPA) and Fourth Industrial Revolution (4IR) smart technologies.

Synapse offers industry executives, practitioners, investors and researchers relevant news, in-depth analysis, and thought leadership articles on trends around 4IR innovation and digital transformation in industries that include banking, retail, manufacturing, healthcare, mining, agriculture, education, and government, among others.

With its insights, interviews and case studies, the magazine aims to be a voice for African 4IR practitioners, researchers, innovators, thought leaders, and the wider African AI community.

Since its launch in 2018, Synapse has amassed a combined readership of 31,300 across the Issuu platform (on which it is published), the AI Media Group’s email database, the AI Expo Africa Community Group on LinkedIn and the AI Media Group’s social media channels where the magazine is distributed. It also links to AI TV, Africa’s only dedicated YouTube streaming channel focused on 4IR business users and trade.

Over the years the magazine has established a significant following across Africa as well as globally, with readers from as far afield as North America, South America, Europe and Asia. This makes Synapse a great marketing platform for startups and established tech companies to reach a broader community of buyers, investors and partners.

Readers around the world

MAGAZINE: Published Quarterly. Official Publication of AI Expo Africa.
YOUR AD / EDITORIAL FEATURE: Advertising and artwork to be supplied as a high-resolution, press-ready PDF of at least 300dpi. Art and editorial features to be submitted to: daniel.mpala@aiexpoafrica.com
RATE CARD (SIZE / RATE*)

Company Listing (logo, company description & hyperlink): R2,500
1/4 Page (includes 1/4 page editorial): R3,500
1/2 Page (includes 1/2 page editorial): R4,500
Full Page (includes full-page editorial feature): R6,500
Double Page Spread: R10,000

REACH AFRICA'S LARGEST ARTIFICIAL INTELLIGENCE & 4IR COMMUNITY WITH SYNAPSE

BOOK AD


Articles inside

CESAREAN DELIVERIES ARE RISING IN RWANDA – AI COULD REDUCE THE RISKS: Smartphone app helps health workers detect post surgery infections (5 min, page 37)
LAUGH AND LEARN: The Surprising Benefits of Chatbots in African Education (8 min, pages 52-55)
METAPHYSIC CEO TOM GRAHAM BECOMES FIRST PERSON TO FILE FOR COPYRIGHT REGISTRATION OF AI LIKENESS (4 min, pages 50, 53)
EUROPE TAKES AIM AT CHATGPT (13 min, pages 48-49)
1,100+ notable signatories just signed an open letter asking ALL AI LABS TO IMMEDIATELY PAUSE FOR AT LEAST 6 MONTHS (9 min, pages 46-47)
EU AI ACT EXPLAINED (6 min, pages 44, 49)
SPENDING ON AI IN MIDDLE EAST AND AFRICA REGION TO SOAR TO $3 BILLION IN 2023 (8 min, pages 41-44)
EGYPT LAUNCHES CHARTER FOR THE RESPONSIBLE USE OF AI (3 min, page 41)
CAN AI HELP SOLVE DIPLOMATIC DISPUTE OVER THE GRAND ETHIOPIAN RENAISSANCE DAM? (2 min, pages 39-40)
NASA-FUNDED SCIENTIST USES EO IMAGERY AND AI TO IMPROVE AGRICULTURE IN UGANDA (4 min, page 35)
KENYA: AI GUIDELINES FOR PRACTITIONERS LAUNCHED (2 min, page 34)
SA’S DATA, CLOUD BLUEPRINT IN THE WORKS, SAYS MINISTER (2 min, page 33)
FROM TEXT PREDICTION TO CONSCIOUS MACHINES: COULD GPT MODELS BECOME AGIS? (1 min, pages 28-32)
SPOTLIGHT ON AFRICA'S MOST PROLIFIC TECH INVESTOR (6 min, pages 26-27)
AI DIGITAL COURSE TARGETS COMMONWEALTH PUBLIC LEADERS (6 min, pages 24-25)
LEADING THE AI AND CV EDUCATION REVOLUTION IN AFRICA BY AUGMENTED STARTUPS (11 min, pages 22-23)
THE HARD POWER ECONOMICS OF AI FOR SOUTH AFRICA (12 min, pages 20-21, 23)
GLOBAL PERSPECTIVE: The era of Autonomous AI Agents (11 min, pages 18-19)
SOUTH AFRICA FACES MANY CHALLENGES IN REGULATING THE USE OF ARTIFICIAL INTELLIGENCE (8 min, pages 16-17, 19)
GENERATIVE AI & THE POPI ACT (4 min, page 15)
INFOREG EXAMINES REGULATION OF CHATGPT & AI IN SA (12 min, pages 12-14)
AFRICA MUST REGULATE AI TO REAP ITS FULL BENEFITS (5 min, pages 10, 13)
RETHINKING DATA GOVERNANCE for Just Public Data Value Creation and (11 min, pages 8-9)
LAUNCH OF THE SA AI ASSOCIATION (6 min, pages 6-7)