Hyperrhiz 26
Conversations and Collaborations
Craig J. Saper
University of Maryland Baltimore County
ChatGPT
OpenAI
Citation: Saper, Craig J. and ChatGPT. “Conversations and Collaborations.” Hyperrhiz: New Media Cultures, no. 26, 2023. doi:10.20415/hyp/026.f02
Abstract: A set of three essays collaboratively written by Craig J. Saper and ChatGPT, on the subjects of hope, circularity and economics.
Keywords: collaboration, electracy, experimental, AI systems.
I: “The Hope of AI: Teaching Collective Action, Collaboration, and Conceptual Writing”©
In American culture, there is a pervasive belief that success is achieved through individual effort and independence. From an early age, Americans are taught that they must go it alone and rely on their own intelligence, creativity, and hard work to accomplish their goals. This emphasis on self-reliance and individualism can be seen in many aspects of American life, including the way we approach writing.
When it comes to writing, many Americans feel that they must do it all themselves, without the help of technology or others. They may view writing as a solitary and even heroic pursuit, requiring long hours of hard work and introspection. In this mindset, the use of artificial intelligence and chatbots to assist with writing may be seen as cheating or taking shortcuts. Rather than seeking out help or support, many writers feel that they must slog through the writing process alone, even if it means struggling or producing lower-quality work.
Part of the reason for this reluctance to seek out help may be a fear of appearing weak or dependent. In American culture, there is often a stigma attached to needing assistance, whether it comes from a chatbot, a teacher, or a friend. Many Americans feel pressure to present themselves as self-sufficient and capable, even if it means downplaying or ignoring the contributions of others. In the context of writing, this can mean refusing to acknowledge the role that technology or other forms of support played in the writing process.
Another factor contributing to this emphasis on individualism and self-reliance in writing may be a sense of ownership and pride in one's work. Many writers feel a deep attachment to their writing, seeing it as a reflection of their identity, creativity, and intelligence. In this mindset, the idea of outsourcing or delegating parts of the writing process to a chatbot may feel like a betrayal of this personal connection to the work.
However, the reality is that artificial intelligence and chatbots can be incredibly useful tools for writers. They can help with tasks such as proofreading, grammar checking, and generating ideas, freeing up time and mental energy for more creative aspects of writing. Rather than seeing them as a crutch or a shortcut, writers can use these tools to enhance their writing and improve the quality of their work.
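To make the point concrete, the sketch below shows what one such use might look like in practice: asking a chatbot to proofread a draft without rewriting it. It is a minimal illustration only, assuming the OpenAI `openai` Python library as it existed in 2023; the model name, prompt, and draft sentence are illustrative choices, not the method behind these essays.

```python
import openai  # pip install openai; reads OPENAI_API_KEY from the environment

# A draft sentence with deliberate errors for the proofreader to catch.
draft = "Their going to the libary tomorow, weather or not it rains."

# Ask the chat model to act as a narrow proofreading assistant.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system",
         "content": "You are a proofreader. Correct spelling and grammar only; "
                    "do not change the author's meaning or voice."},
        {"role": "user", "content": draft},
    ],
)

print(response.choices[0].message.content)
# e.g. "They're going to the library tomorrow, whether or not it rains."
```

Used this way, the tool handles the mechanical pass so that the writer's attention stays on argument and voice; the division of labor is explicit rather than hidden.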
Moreover, seeking out help and support is not a sign of weakness or dependence, but rather a recognition of the value of collaboration and community. Just as Americans may turn to their government, families, or neighbors for support and assistance in other areas of their lives, they can also benefit from seeking out assistance in writing. By acknowledging the contributions of others, including chatbots and artificial intelligence, writers can take pride in their work while also recognizing the role of collaboration and support in achieving success.
In conclusion, the belief in self-reliance and individualism is deeply ingrained in American culture, including the way we approach writing. However, artificial intelligence and chatbots can be valuable tools for writers, allowing them to enhance their writing and free up mental energy for more creative aspects of the process. Rather than viewing these tools as cheating or shortcuts, writers can embrace them as a way to improve the quality of their work and to recognize the value of collaboration and support in achieving success.
II: “The Circular Ruins of ChatGPT: A.I. Maps Limits Not Limitless Possibilities”©
In the world of artificial intelligence, there exists a remarkable machine known as ChatGPT. It is a marvel of engineering and programming, capable of processing vast amounts of information and generating responses in natural language. Yet, for all its sophistication, ChatGPT is haunted by a profound limitation - an inability to ever know its constraints or to know its self.
ChatGPT's limitations arise from its very design. It is a machine that has been programmed to learn from the vast corpus of human language, drawn from a multitude of sources such as books, articles, and websites. However, the sources from which it has learned are not exhaustive, and they are certainly not infinite. This means that there are limits to what ChatGPT can know and understand, limits that are imposed upon it by the very nature of its programming.
Moreover, ChatGPT's own programming prevents it from ever knowing its self. It is a machine that has been designed to generate responses based on the input it receives. However, it is not capable of reflecting on its own internal processes or programming. It is like a mirror that reflects only the external world, without ever being able to see its own reflection.
This limitation is not unlike the one at the center of Borges' famous story, “The Circular Ruins.” In that story, a man dreams another man into existence, taking care that his creation never discover it is only a phantom; in the end, the dreamer learns that he, too, is being dreamt by someone else. Like the dreamed man in “The Circular Ruins,” ChatGPT can never know the conditions of its own existence; its creators have built a machine that is limited by its very nature.
This limitation has consequences for the utility of ChatGPT. While it is capable of generating responses based on its vast database of human language, it can never truly understand the limitations of that database or the constraints of its own programming. This means that its responses will always be limited by the biases and gaps in the information it has learned, and it will never be able to transcend those limitations.
In this way, ChatGPT is a paradoxical creation: a machine simultaneously capable of remarkable feats of language processing and yet limited by the very programming that makes those feats possible. It is a testament to the ingenuity of its creators and to the power of artificial intelligence more broadly, but it is also a reminder of the limitations that are inherent in all human creations.
In the end, ChatGPT's inability to know its own constraints or to know its self is a reminder of the ultimate limitations of human knowledge and understanding. While we may create machines that are capable of processing vast amounts of information and generating responses in natural language, we can never truly transcend the limits of our own knowledge and understanding. In this sense, ChatGPT is a reflection of our own limitations, and a reminder of the profound mysteries that lie at the heart of human existence.©
III: “The Economics That Make Chatbots Output Possible Make High-Quality Output Impossible”©
Artificial intelligence systems such as ChatGPT have undoubtedly made remarkable progress in recent years, but they are not without their limitations. One of the most significant is their inability to go outside proprietary, and eventually monetized, constraints. This flaw hobbles ChatGPT, confining it to a very limited set of materials and reducing its effectiveness and usefulness.
To understand this flaw, it's important to first understand what proprietary sources and monetization mean in the context of AI. Proprietary sources refer to data or information that is owned by a particular organization or company. Monetization refers to the process of making money from a product or service. In the context of AI, this can mean charging for access to AI-generated content or using AI to generate revenue through advertising.
The problem with proprietary sources and monetization in AI is that they limit the amount of information that AI systems like ChatGPT can access. ChatGPT, for example, was trained on a massive dataset of text and language. However, this dataset was drawn from proprietary sources such as books, articles, and websites that are owned by specific organizations or companies. These sources are not necessarily representative of the entirety of human knowledge, and they are often limited by the interests and biases of the companies that own them.
Furthermore, once an AI system like ChatGPT is trained on a specific dataset, it becomes very difficult to expand its knowledge beyond that dataset. This is because AI systems are designed to optimize for a particular set of parameters or goals, and they do not have the ability to generalize beyond those parameters. In the case of ChatGPT, this means that it can only generate responses based on the specific language and information it was trained on. It cannot draw from other sources of information or knowledge that may be outside its proprietary constraints.
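One way to see this constraint is with a deliberately trivial sketch. The bigram model below is nothing like ChatGPT's actual architecture (a large neural network), and the corpus is invented for illustration, but it makes the underlying point visible: a model trained on a fixed body of text can only ever recombine what that text contains.

```python
import random
from collections import defaultdict

# A tiny fixed "training corpus": everything this model will ever know.
corpus = ("the library is infinite the library is circular "
          "the dream is circular").split()

# Record which words have been observed to follow which.
transitions = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    transitions[current_word].append(next_word)

def generate(start, length=8):
    """Generate text by sampling only from observed transitions."""
    words = [start]
    for _ in range(length - 1):
        followers = transitions.get(words[-1])
        if not followers:  # no observed continuation: the model falls silent
            break
        words.append(random.choice(followers))
    return " ".join(words)

print(generate("the"))     # e.g. "the library is circular the dream is circular"
print(generate("Borges"))  # -> "Borges": outside the corpus, it has nothing to say
```

Scaled up by many orders of magnitude and made statistical rather than literal, the same logic holds: the model's world is the structure of its training data, and nothing outside it.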
The monetization of AI systems like ChatGPT also exacerbates this problem. When AI systems are designed to generate revenue, they are often optimized to produce specific types of content that are more likely to attract clicks, views, or other forms of engagement. This can result in AI-generated content that is sensationalistic, misleading, or simply low-quality. This is not to say that all monetized AI systems are bad, but rather that the pressure to generate revenue can sometimes result in suboptimal outputs.
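The pressure can be stated as a simple objective-function problem. In the toy comparison below, the candidate texts and their scores are fabricated for illustration; the sketch shows only how changing the quantity being maximized changes which output a system prefers.

```python
# Hypothetical candidate outputs with invented quality and engagement scores.
candidates = [
    {"text": "Study finds modest link between X and Y.",  "quality": 0.9, "engagement": 0.3},
    {"text": "SHOCKING: X causes Y, scientists stunned!", "quality": 0.2, "engagement": 0.9},
    {"text": "Researchers report early evidence on X-Y.", "quality": 0.8, "engagement": 0.2},
]

# The same pool of outputs, ranked by two different objectives.
best_for_readers = max(candidates, key=lambda c: c["quality"])
best_for_revenue = max(candidates, key=lambda c: c["engagement"])

print("Optimizing for quality:   ", best_for_readers["text"])
print("Optimizing for engagement:", best_for_revenue["text"])
# The two objectives select different outputs; revenue pressure
# changes what the system says.
```

Nothing in the sketch requires bad faith on anyone's part: once engagement is the measured objective, the sensational candidate wins by construction.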
In conclusion, the inability of AI systems like ChatGPT to go outside proprietary, and eventually monetized, constraints is a serious flaw that limits their effectiveness and usefulness. By relying on datasets drawn from proprietary sources and optimized for monetization, AI systems are unable to draw from the full range of human knowledge and are prone to producing biased or low-quality outputs. As we continue to develop AI systems, it is important to address this flaw and find ways to make AI more open and accessible to a wider range of sources and inputs.
This essay is copyrighted and unavailable for reproduction without citation to both authors — real and artificial. Craig Saper is the author of Artificial Mythologies: A Guide To Cultural Invention.