What does ChatGPT mean for the open source community?

The open source community thrives on the genuine interest in sharing. That's something ChatGPT cannot emulate.

Machine learning and "artificial intelligence" are on a lot of people's minds right now, largely because of ChatGPT. With its widely available public demonstration, the not-so-aptly named OpenAI group (ChatGPT is not open source) has shown the public that when you point a whole cloud of computing power back at the internet, you can generate believable text about nearly any subject. As many people have pointed out, there's a big difference between "believable" and "correct," of course, but on a superficial level, ChatGPT does seem like a valid source of summary output. ChatGPT isn't open source, but nearly everything it outputs is based on open knowledge. It's based on content you and I have put onto the internet for others. Does that mean ChatGPT has joined a community? Is ChatGPT contributing to improving shared knowledge? Or does it just reduce how many internet searches you have to do before arriving at a general idea of what the answer to your question might be?

Benefits of the contrary

You're probably a member of some community, whether it's an open source project or your local neighborhood. Either way, you've probably noticed that people can sometimes be annoying. It's a fact of life that people have opinions, and those opinions are often in conflict with one another. When there's a disagreement over how something ought to be done, it usually feels like time is being wasted. After all, you know the best solution, but instead of putting it into action, you have to spend all day convincing everyone else of its merit. It would be so much easier if everyone would just agree with you, right?

Disagreement is also uncomfortable. It leads to difficult conversations. You have to find a compromise or else convince somebody to see things your way, even as they try to convince you to see things their way. It's not easy, and it's often not what you want to be doing at any given time.

Of course, most adults understand that there's power in the contrary. A bot might be able to emulate a contrary opinion, but there's a difference between an opinion and mere stubbornness or obstinacy. Differing opinions, formed from expertise and experience, are vital for successful and fruitful collaboration. As uncomfortable as they may be, differing opinions on the "right" way to do something are the best way to stress-test your ideas. By looking at the contrary, you can identify your preconceptions, biases, and assumptions. By accepting differing opinions, you can refine your own.

Spark of originality

A bot armed with machine learning can only invent ideas from existing ideas. While there may be value in distilling noise into something singularly tangible, it's still just a summary of notions that have come before. A gathering of actual human minds is powerful because of the seemingly irrelevant and unexpected ideas that form from conversation, iteration, agreement, disagreement, and diversity of experiences and backgrounds. It might not make logical sense for me to base my CI/CD pipeline on the strategy I invented for last night's tabletop roleplaying game, but if that serves as inspiration for something that ends up being really good, then the logic doesn't matter. There's an irrationality to interpreting the world through your experience of embroidering or gardening or cooking or building LEGO sets with your kid, but that doesn't make it invalid. In fact, it's the ability to connect inspiration to action that gives birth to invention. That's not something ChatGPT can learn from the internet.

System design

ChatGPT and other AI experiments may well have their uses in reducing repetitious tasks, catching potential bugs, or getting you started with a particularly confounding YAML file. But maybe the hidden message here is actually a question: why do we think we need ChatGPT for these things? Could it be that these processes themselves need improvement? Could it be that writing some "simple" YAML isn't as simple as it first seemed? Maybe those bugs that need an artificial intelligence to catch them are less a disease than a symptom of over-complex language design or a failure in how we teach code, or just an opportunity to develop easier entry points into programming.

In other words, maybe machine learning bots aren't the solution to anything, but an indication of where we're doing a disservice to ourselves. In open source, we design the systems we interact with. We don't have to design chat bots to help us understand how the code works or how to program, because we're the inventors. We can redesign around the problems. We don't need a chat bot to coalesce and condense the confusion of the worldwide community, because we can create the best solution possible.

Human connection

Community is about people. Making connections with other people who share your interest in and passion for something is what makes communities so fulfilling. Both the disagreements and the moments of shared inspiration are profound experiences that we humans bring to one another in our forums, chat rooms, bug reports, conferences, and neighborhoods. As an open source community, we create technology. We create it openly, together, and with a genuine interest in sharing experiential knowledge. We value diversity, and we find value in the perspectives of novices and experts alike. These are things you can't distill into a machine learning chat bot, whether it's open source or not (and ChatGPT is not).

The open source community thrives on the genuine interest in sharing. That's something ChatGPT cannot emulate.

Seth Kenlon
Seth Kenlon is a UNIX geek, free culture advocate, independent multimedia artist, and D&D nerd. He has worked in the film and computing industry, often at the same time.

3 Comments

Great insights, Seth! You've helped me sort this out. I'm fascinated by ChatGPT and other AI instances. However, there is a lack of human connection and insight. There's definitely no community, and it occurs to me that we only get responses based on the data that's fed into these engines. Helpful in some instances, but without context, and minority voices are certainly never heard.

Love this entire perspective on ML and AI lacking the human component. Let's face it: AI is not human, nor will it ever be. That is, of course, unless I myself happen to be a bot programmed to be unaware of the fact that it is one. At the moment, I see and hear a lot of grandiose speculation about the superpowers of ML and AI, most of which seems to come less from firsthand experience than from a trickle of reporting on the ChatGPT service and the experiences a handful of individuals have had with it thus far. The tube has great reach these days. The implications coming from the chatter far outweigh the actual capability of the OpenAI service, and, unsurprisingly, they are very human in both their pessimism and optimism about its potential. I won't deny the awesomeness and power of ML in its current and potential state, but for now I see most of the compute spend heading in an automation direction (mundane taskery), and I'm actually surprised at the increase in hype, considering that competent ML procedural work has been around for almost a decade now, if not longer.

An issue I have not seen covered much is important to many open source communities: licensing.

Many orgs will not accept even human-originated contributions that were copied from another project with an incompatible license. What are they supposed to do with an AI-generated code snippet from an AI entity trained on "open" code that may be publicly visible, but sourced under such an incompatible license?

This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.