Why an Open-Source Future Can Make AI Work for Creatives

Our expert argues that discussions around ethical AI overlook how important open-source principles are to a more equitable future, especially for creatives who want to use the technology.

Written by Kent Keirsey
Published on Jun. 05, 2024

At the risk of coming across as a low-effort ChatGPT output, the rapid evolution and distribution of applied machine learning is clearly transforming the technological landscape faster than most people can keep up with. 

Yesterday, AI meant one thing. Tomorrow, it might mean something entirely different. Despite the massive, constant news coming from AI companies, it’s still early days with this technology. Use cases are created before our very eyes, and we’re just scratching the surface of how this technology will transform our lives. 

Generative artificial intelligence, with an evolving ability to produce all kinds of media, has greatly expanded how we will create information using technology in the future. And as the world begins to explore how it can use AI for various tasks, both work and personal, a debate is brewing: Should AI remain a closely guarded, walled garden controlled by a few giants, or should it be democratized through open-source principles? 

Platform Model Vs. Open-Source AI

Many companies want to keep their AI-based services proprietary so that they can develop them into platforms. That model would lock users into their services because the barrier to switching is so high. By contrast, an open-source approach would allow users to exercise more choice in the tools they use.



The Tech Industry Is Evolving

Let’s start with something we can all agree on: The dimensions upon which technology competes are shifting. Here’s why: 

Traditionally, better hardware or more efficient software drove technological advancement. When combined with innovative user interfaces, these improvements changed how we use technology. For example, before the App Store, smartphones weren’t very smart! After the App Store combined more capable integrated hardware with packaged software that streamlined the user interface, entrepreneurs were able to create household-name companies like Uber, Instagram and Spotify. 

Today, AI models have added a new layer to this competition. Rather than being a predictable element of value delivery (i.e., input here guarantees a specific type of output there), AI models are dynamic, with the capacity to evolve as new information is created and integrated. Instead of writing code to instill new capabilities, engineers train or fine-tune the models with data to improve performance on specific tasks. 

In this new landscape, companies with proprietary AI models are striving to establish dominance. Their goal is to create platforms. We’re familiar with the platform strategy in technology — it has created some of the largest tech giants of the modern era. This happened in part because, once you use a certain platform, stopping or switching is very difficult. Ever try switching from iPhone to Android? It’s a pain! All your personal information needs updating, you need to log in to multiple accounts again, probably change a bunch of passwords you forgot, and so on. This is by design. There’s a reason Apple’s group chats have had blue bubbles for as long as they have. 

And AI platforms are continuing down this path. Users invest their personal information so that AI can build personalized experiences that offer significant value to them but are non-portable. That is, users lose that personalization if they switch platforms because the fine-tuning is incompatible, and they become increasingly locked into one platform.

This platform approach, combined with a perpetual, recurring fee structure, explains the massive investment from angel investors and venture capitalists in new start-up ventures, as well as rapidly expanding budgets for AI at companies like Meta, Google and Microsoft. Similarly, leaders in this space are advocating against letting any would-be competitors join the race. By controlling the development and distribution of AI models, companies aim to become indispensable, extracting continuous value from users over the long term. This dynamic applies to individuals, businesses and governments alike, all of whom personalize the technology with their time, data and integrated workflows. 


Empowering Creatives Through Open-Source AI

Although proprietary models currently dominate many mainstream conversations around generative AI, open-source models offer a compelling alternative. Unlike closed AI providers like OpenAI, which offer access to their tools as a service, open models allow individuals to download the model files (known as weights) and use them in any compatible application. 

The open-source AI ethos means that individuals can specialize, fine-tune, and own the core brain powering their personalized tools. Unlike proprietary models that confine users to a single platform, open-source models offer flexibility and adaptability. They are portable across applications and can serve as long-term assets for individuals and businesses alike. 

AI models have become powerful tools for early adopters in creative disciplines — artists, writers, musicians — enabling them to enhance their work and explore new realms of creativity. For these individuals, models have become tools in their creative process. By fine-tuning these models with unique data sets, creatives can generate unique, compelling content that reflects their distinctive vision. 


How Will AI Models Impact Intellectual Property?

Only after making this distinction can we really begin to see what’s at stake in this debate. AI models themselves represent a new form of intellectual property. 

Many modern movements for ethical AI argue that licensing data from rights-holders is the only responsible and ethical approach to building models. A system in which model builders must license data for training poses a significant risk to the viability of open model weights, however. In order to evaluate the paths forward, we must determine which path best serves those most impacted by the technology: the creatives now competing with its capabilities.

Over the past century, an industry has arisen around collecting, aggregating, and managing rights. For most commercial creators, the rights to their work were either purchased outright or lost through the fine print of terms of service agreements on sites where they uploaded content before the advent of generative AI. They won’t be the beneficiaries of any rights agreements. 

Further, the lion’s share of revenues from any licensing regime will go to the large corporations that serve as the intermediaries of licensing deals. Just ask the creators whose work Adobe used to train its generative AI product, Firefly. Although the product was sold and marketed as ethical AI, these creators merely had their work legally used to train the model. Their reward? A one-time bonus in September 2023, in the range of $25 to $50 for the average creator. Meanwhile, Adobe will use the valuable data and training to sell product seats for years to come.

Without openly licensed models, individuals will face higher prices for access to models that perform worse unless users hand over ever more data to the platform, and they ultimately risk being effectively locked in without a viable or economical path to switch providers. Worst of all, this eliminates the ability of creators to truly own this new class of intellectual property. 


Open-Source Alternatives

If creators have access to the foundational resources, however, with licenses that empower them to own their work, they can retain the autonomy to develop, modify and distribute their innovations without the constraints imposed by proprietary systems. 

In this type of ecosystem, creatives could choose to freely share their insights and customizations to accelerate creative experimentation with technology. Further, new assets like professional-grade custom models could develop a new marketplace, allowing a new mechanism for artists and creatives to commercialize their work. Creatives would have something that they lack in a world with closed models: a choice.

Open-source models make the world’s information accessible to everyone, including individuals and small businesses. The ability to develop models will be an incredibly important component for competing in almost every industry going forward. Creatives are no exception.

Although many people in the ethical AI debate focus on training data, with concerns that some essential component of the trained works has been unjustly copied, the proposed systems of royalties, licenses and more leave those same individuals no better off in the age of AI. This is a crucial distinction in the debate over ethical AI, which I argue should focus on outcomes. In a world of AI, creatives need access to the foundational elements of the technology in order to compete effectively in the market. We should prioritize systems that give the people impacted by the technology access to its benefits, supported by a strong ecosystem of open foundational models and tools.



An Open-Source Future Is Brighter

Open-source AI is more than just a trend: It’s a movement supported by those who believe that the future should be more equitable than the past. One of the greatest technological developments in human history should not be monopolized in ways that deepen inequity. 

Because open-source AI provides the flexibility to customize and adapt models to unique needs while protecting individual data, advocating for and supporting it is imperative. With accessibility, we lower the barrier to entry for creation and innovation, enabling everyone to participate in the new value economy.
