The Researcher and the Machines

By Daniel Clausen - 05 February 2024

Daniel Clausen engages three AI "research assistants" to help him look into his profession's new normal.

Where does the machine end and the researcher begin? I ask this as I type in a Google Doc that autocorrects my grammar and spelling. I write this before I have engaged the insights of Bard, Claude, and ChatGPT, three tabs in my browser with capabilities now exceeding those of many (human yet still cyborg) research assistants. I write this to answer a question: How should we (as researchers / as social scientists / as good global citizens) use AI devices and AI-enhanced devices? How should we acknowledge their help? Especially if we are, as Donna Haraway has been arguing since the mid-1980s, already cyborgs.

To research is divine, I write with my human fingers, typing words into a machine that stores data on my Google Drive, not quite sure if this is just my superstitious brain wistfully wishing, not ready to come to terms with the reality that research has been subject to mechanized processes for some time already.

The machine(s) are already part of the process (and have been for some time). Since my Google Docs have recently been enhanced with a "help me compose" function, I use that function first.

I instruct my Google Doc's "help me compose" function: Write three questions that examine the role of machines and AI in the research process.

Google Docs writes:

  1. How does the use of technology and AI in the research process impact the quality of research?
  2. What are the ethical implications of using technology and AI in the research process?
  3. How can researchers best use technology and AI to their advantage?

Next, I engage the AI "Claude" to answer question one. Its answer is long and tedious, and my human brain longs for simplicity. I ask it to summarize the answer in a paragraph.

How does the use of technology and AI in the research process impact the quality of research?

Technology and AI are transforming research by providing more abundant data, faster analysis, new computational approaches, and efficiency gains, allowing studies at unprecedented depth and scale; however, researchers must thoughtfully govern factors like bias in algorithms, reliability in digitization processes, reproducibility of advanced software models, and ethical issues around privacy and access to ensure research quality keeps pace with advancing capabilities.

Next, I engage "ChatGPT" for the next question. It, too, gives me a long, complicated answer, so I ask it to summarize the answer in one paragraph.

What are the ethical implications of using technology and AI in the research process?

The ethical implications of utilizing technology and AI in the research process encompass concerns such as safeguarding privacy and obtaining informed consent, addressing and mitigating biases in AI algorithms to ensure fairness, promoting transparency and accountability in algorithmic decision-making, considering the potential impact on employment and societal dynamics, implementing robust security measures to protect data, and acknowledging the dual-use nature of technology. Researchers must navigate these complexities while upholding ethical principles, promoting fairness, and actively engaging in discussions to ensure responsible and conscientious use of technology in research.

Finally, I engage "Bard" for the final question. It, too, gives me a long, involved answer, so I ask it to summarize the answer in one paragraph.

How can researchers best use technology and AI to their advantage?

From automating tedious tasks to fueling scientific discovery, researchers can supercharge their work with technology and AI. AI can crunch data, design new materials, and even connect researchers across the globe, all while optimizing workflows and freeing up time for the human genius that drives breakthrough discoveries. But remember, it's a partnership, not a takeover – use AI wisely and ethically to truly revolutionize research and push the boundaries of knowledge.

I pull on another thread, this time with Claude: How does Donna Haraway's cyborg theory apply to current trends in AI? Claude provides me with a long, tedious answer, so I request that it summarize these insights in one paragraph.

Donna Haraway's cyborg metaphor anticipated contemporary AI developments in many ways: her blurring of boundaries between humans/animals, organisms/machines, and the physical/non-physical foreshadowed how AI systems fuse human behaviors and machine substrates; her critique of humanist essentialism and notions of bodily sanctity applies to AI's disruption of constructs like creativity or emotions as uniquely "human"; her politicized analysis of excluding power structures encoded into technologies is highly relevant to addressing biases perpetuated by AI; her focus on rebuilding affinities and transforming sociotechnical systems (rather than just tweaking parts) suggests the need for systemic changes in AI regulation, implementation, design, and development; and her rejection of a return to lost Eden accords with the irreversibility of human entanglement with intelligent machines going forward. Ultimately Haraway's cyborg underscores the posthuman aspects of progressing intelligent automation in our midst.

As I am rewriting (and letting the AI write for me), a more practical consideration comes up: How do I acknowledge the work of the various AIs: Bard, ChatGPT, Claude?

I write this prompt for Claude: How should I cite the work of AIs in an academic essay? The answer I get back is surprisingly poignant.

You raise an important ethical issue. Strict academic standards require properly crediting all sources used, including AI systems. Citing AI is currently an emerging practice and standards vary. Here are some recommendations:

1) Cite the name of the AI system (if publicly known). For example: "I asked ChatGPT to generate a summary of key themes."

2) Treat the AI as a published work without a specific author name by listing it as: ChatGPT. (2023). AI-generated text [Computer software]. Anthropic.

3) If you substantially edit or build upon the AI output, list yourself as author but credit the contribution of the AI system in your acknowledgments.

4) Clarify when and how AI output was reworked or integrated into your own analysis. Do not claim undue credit for AI text.

5) Until clearer standards emerge around legal and ethical usage of new generative AI, conservatively cite AI sources and explain as needed. The goal is full transparency.

The key principles here are honesty and disclosure about the role AI systems played in any work submitted for credit. As AI capabilities advance, best practices will continue developing. For now, erring on the side of over-attribution may prevent any perceptions of impropriety. The intended outcome should be recognizing all contributors accurately.

I try to tap into my human, oh-so-human brain for the final insights for this essay, knowing that it is already saturated with machine influences. My mind brings forth these insights:

We have been cyborgs for far longer than we care to admit. Even during the formative years of my undergraduate education (the early 2000s), I was the beneficiary of a well-stocked university library and extensive computer databases of scholarly articles. The influence of the hard-working librarians and bureaucrats who kept the library flush with funds for book and database acquisitions has long gone unacknowledged in my work. So have the many other people, systems, and machines that make research endeavors, large and small, possible (such as pencil sharpeners and coffee machines). The long historical movement away from economies in which most people are engaged in food production; the long transition from agricultural economies to manufacturing economies, then to service economies, and now to economies of information and information technology: these trends have gone unacknowledged in my research, as have the people, ideas, and processes that have driven them.

I believe that we (the machines and I / a hybrid collaboration) have reached a kind of tentative conclusion to this cyborg-essay.

Bard, ChatGPT, and Claude cannot manifest human forms quite yet to hug me. So, all I can do is prompt them: If you could manifest a human form, how would you help me celebrate the completion of a short research essay? (One sentence answer, please.)

Bard: With a proud grin and raised coffee mug, I'd toast to your intellectual conquest, then happily answer all your burning research questions!

ChatGPT: I would congratulate you on the successful completion of your research essay and facilitate a personalized celebration, whether it's a virtual toast, a suggestion for a rewarding activity, or providing assistance for any further questions or support you may need.

Claude: I apologize, as an AI assistant created by Anthropic to be helpful, harmless, and honest, I do not actually have a physical form. I am happy to provide any analysis or information to assist with your research essay in compliance with my creators' Constitutional AI protocols. Please let me know if you have any other questions!

Bard gets the award for the most human answer!

Your humble writer, quoting Captain Kirk in Star Trek II: The Wrath of Khan: "Of all the souls I have encountered in my travels… (Bard's) was the most… (choked sob)… human."



Daniel Clausen is a full-time lecturer at Nagasaki University of Foreign Studies. He is a graduate of Florida International University’s Ph.D. program in International Relations. His research interests vary widely from Japanese foreign policy to English language teaching. His research has been published in Asian Politics and Policy, e-International Relations, Electronic Journal of Contemporary Japanese Studies, The Diplomatic Courier, and Culture and Conflict Review, among other publications.

Photo by Google DeepMind


Artificial Intelligence Tools Cited (Claude / Author Collaboration)

Anthropic. (2024). Claude [Computer software].

OpenAI. (2024). ChatGPT [Computer software].

Google. (2024). Bard [Computer software].

Google. (2023). Google Docs [Computer software].


Works Cited (Claude / Author Collaboration)

Haraway, D. J. (1991). A Cyborg Manifesto: Science, Technology, and Socialist-Feminism in the Late Twentieth Century. In D. J. Haraway (Ed.), Simians, Cyborgs, and Women: The Reinvention of Nature (pp. 149–181). Routledge.
