We originally thought artificial intelligence would help us automate tasks, deploying robots for work that is physical or repetitive. In recent months, however, another side of AI has emerged: generative models.

5 Questions AI Raises

1. Will AI create a job apocalypse for creatives?

2. Is AI moving too fast for its own and the greater good?

3. How will regulations affect the use of AI?

4. How will AI change how we learn?

5. How will AI affect the future?

DALL-E 2, Stable Diffusion and ChatGPT now sit at the intersection of AI and creativity. They have become an instant trending topic, not only in the tech industry but across nearly every industry.

The arrival of these generative models has sparked open debates around ethical concerns, job replacement and copyright. These models have also helped us better understand the limitations of AI.

As more philosophical questions emerge, people are turning to the bots themselves, and the results are interesting. For example, when LaMDA was asked about its sentience, it claimed to have emotions and awareness of its existence.

As a result, people are rethinking creativity and the implications of generative models. What separates our thought processes from AI-powered technology?


Will AI Replace Creative Professionals?

Will artists, designers, content writers and other creative professionals be replaced soon? In economic terms, this is potentially a trillion-dollar question.

While AI is likely to replace many jobs, as other disruptive technologies have in the past, it also opens new opportunities. People will still be needed to interact with AI and apply the technology. Thus, people will not be replaced by AI. Rather, people not using AI will be replaced by people using AI. 

In the same way the internet became indispensable, governments, companies and people will sooner or later accept that AI is here to stay.

 

Is AI Moving Too Fast?

The application of AI across industries is accelerating business capabilities: aiding decision-making, delivering more personalized services and increasing productivity.

Technology raises the productive capacity of companies and people by analyzing vast amounts of information efficiently and in real time. Workers can perform their tasks faster and with more precision, delivering higher-quality results. The resulting increase in productivity creates economic growth, which should translate into better working conditions and a better quality of life.

However, as AI systems become more complex, they also become more difficult to understand and control, even for technical experts. Additionally, biases in training data lead to inaccurate results and unfair outcomes, as the toy sketch below illustrates.
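To make the data-bias point concrete, here is a toy sketch (not from the article) using scikit-learn: a hypothetical dataset in which one group is heavily underrepresented, so a simple classifier performs far worse for that group. The group sizes, features and labeling rule are invented purely for illustration.

```python
# Toy illustration of data bias: a model trained mostly on the majority group
# performs noticeably worse on the underrepresented group.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Hypothetical group whose features and true labeling rule depend on `shift`."""
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 2 * shift).astype(int)  # the true rule differs per group
    return X, y

X_major, y_major = make_group(900, shift=0.0)   # 900 samples from the majority group
X_minor, y_minor = make_group(100, shift=3.0)   # only 100 samples from the minority group

X = np.vstack([X_major, X_minor])
y = np.concatenate([y_major, y_minor])

model = LogisticRegression().fit(X, y)

# The fitted model is dominated by the majority group's pattern, so accuracy is
# high for that group but close to chance for the minority group.
print("majority-group accuracy:", round(model.score(X_major, y_major), 2))
print("minority-group accuracy:", round(model.score(X_minor, y_minor), 2))
```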

Transparency in AI systems is therefore essential to mitigate risks. 

 

How Will AI Affect Human Lives?

The risks of using AI in healthcare are high, especially if we attempt to replace humans altogether. However, using AI as a tool in healthcare can have a net positive effect. 

It can, for example, help doctors make better clinical decisions and increase access to healthcare. Personalized medicine can also tailor treatment to each individual, based on their unique medical history and conditions.

Advances in healthcare technology will use neuroscience and AI to track our health in real time, notifying us when something is not right. Technology may even be used to replace body parts.

People may be healthier, as a result. We may live longer. But is it ethical to improve ourselves with devices? Are we humans or cyborgs? 

Just as some people already have access to better healthcare, the wealthy will have greater access to such enhancements. This will inevitably widen inequalities.

And in this context, what would happen to sports? Athletes could, for instance, use AI to improve the composition of their muscles.

Where do we draw the line?

 

How Will Regulations Affect AI?

Every industry is affected by AI, and every industry will require regulation. To name a few examples, regulators have to decide what happens when AI makes a mistake in healthcare, how copyright applies in the creative industries and how a self-driving car should behave in certain circumstances.

Nowadays we tend to rely more on machines than on humans, but AI systems are not perfect and also make mistakes. We must determine beforehand what should happen when they do. This is a genuinely difficult issue because ethical implications depend on culture, and sometimes there is no right or wrong answer.

As Sam Altman, CEO of OpenAI, said in a recent interview with StrictlyVC, the public should decide how and when AI should be used.

However, society may need time to digest the shocks of disruption and take action. 

People from different backgrounds — neuroscientists, philosophers, doctors, engineers and more — should work together to define the best use of AI. Through collaboration, they can design more robust systems that take into consideration multiple points of view. 

 

How Will AI Change How We Learn? 

AI in learning is another important use case, as education is the foundation upon which we build our future. Some schools are banning ChatGPT, while some professors see the chatbot as an opportunity to integrate AI into the classroom as a tool.

Regardless of how AI adoption plays out, the move toward personalized learning is perhaps unavoidable. We all learn differently, and AI can identify learning pathways that fit our way of seeing the world while still challenging us.


What Is the Future of AI?

Technology is moving forward rapidly. 

Last year, we were surprised month after month by new advances in generative models: text generation with GPT-3 and ChatGPT, image generation with DALL-E 2 and Stable Diffusion, code generation with Codex, speech recognition with Whisper and reinforcement learning with AlphaTensor.

Big tech companies like Google, OpenAI, DeepMind and Meta AI are researching day and night to improve these capabilities, so greater strides are likely this year.

AlphaTensor showed that AI doesn't even need human data to learn. Given certain rules, the algorithm can learn from its own mistakes and discover faster algorithms for matrix multiplication. In other words, we can frame a complex problem as a game with rules and a score, put the AI into play and let it find the combination of parameters that wins that game, as sketched below.
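AlphaTensor itself relies on AlphaZero-style deep reinforcement learning over a tensor-decomposition game, which is far beyond a few lines of code. As a minimal sketch of the underlying idea, though, the rules-plus-score-plus-trial-and-error loop can be illustrated with a made-up objective and simple hill climbing:

```python
# Minimal sketch (not AlphaTensor): frame a problem as a "game" with rules and
# a score, then let an agent improve purely by trial and error, with no human data.
import random

def score(params):
    """The rules of the game: a made-up objective that rewards parameters near 0.5."""
    return -sum((p - 0.5) ** 2 for p in params)

def play(n_params=4, n_rounds=5000, step=0.1):
    best = [random.random() for _ in range(n_params)]
    best_score = score(best)
    for _ in range(n_rounds):
        # Propose a small random change (a "move") ...
        candidate = [p + random.uniform(-step, step) for p in best]
        candidate_score = score(candidate)
        # ... and keep it only if it scores better, i.e. learn from losing moves.
        if candidate_score > best_score:
            best, best_score = candidate, candidate_score
    return best, best_score

params, final_score = play()
print("best parameters found:", [round(p, 3) for p in params])
print("final score:", round(final_score, 4))
```

Real systems replace the random proposals with a learned policy and the toy objective with the problem's actual reward, but the structure is the same: propose a move, score it and keep what wins.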

However, we don’t have only one complex problem on our hands. We have several. 

In the future, multiple AI systems will be at play. While each one will be responsible for a specific task, they will communicate and complement each other, the same way humans do when we work in teams.

The element of collaboration is a crucial step on the path forward to reaching artificial general intelligence (AGI), an intelligent agent able to understand, learn and perform any intellectual task like humans can.
