Analysis: My first 600 ChatGPT conversations
Building AI fluency + the inaccessible data goldmine of chat histories
I spend a lot of time talking to robots. Friends like Perplexity or Claude or Chuck (as my family calls ChatGPT) have become an integral part of my life and my day-to-day workflows at work and home. For example, summarizing long documents, meeting notes, or articles has become so much faster: I can get straight to the important insights without wading through endless details. When I'm brainstorming ideas, I have an always-ready collaborator offering fresh angles, questions, and suggestions to push my thinking further. It's also my go-to for quick research. Instead of scrolling through pages of search results, I get concise overviews and relevant data points in seconds, which keeps me moving without breaking my workflow.
I recently exported my entire 18-month ChatGPT conversational history and metadata and took some time to see what I could learn from it.
My ChatGPT conversations per month
I have had 600+ discrete conversations with ChatGPT alone, representing thousands of hours of my work and personal life. As you can see, our relationship started getting real serious in November 2023. That increase in usage aligns with the introduction of voice in late Q3 2023, which now accounts for 43% of my usage and has totally changed the way it's integrated into my life.
ChatGPT conversations: Voice vs Text
If you're not using voice already, I suggest prioritizing it as part of your workflow (e.g., by assigning the action button on your phone to invoke ChatGPT voice). It's a modality that lends itself well to brainstorming and can be really useful while driving, cooking, or pacing around the house with a newborn baby. You can have a full conversation on any topic, go back and forth, brainstorm, and then at the end ask Chuck to summarize and convert it into any sort of document. Unsurprisingly, the ease of long conversations in this modality has increased my average conversation length:
Average length per conversation
A warning to all: be careful, because sometimes I feel like I'm the guy on the phone with Jake from State Farm when my wife catches me talking to Chuck with my AirPods on…
When I used AI to analyze the content of the transcripts, I saw a relatively even split between work and non-work use cases.
ChatGPT use by category (misc. topics excluded)
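If you want to try a similar breakdown on your own export, here's a minimal sketch of the categorization step, assuming the OpenAI Python SDK and the conversations.json file from a ChatGPT data export. The category list and model name are illustrative, not the exact setup behind the chart above.

```python
# Sketch: ask an LLM to bucket each exported conversation title into a rough
# category. Assumes the OpenAI Python SDK and a conversations.json export;
# the model name and category list are illustrative.
import json
from collections import Counter
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

CATEGORIES = ["work", "home", "parenting", "travel", "health", "misc"]

def categorize(title: str) -> str:
    """Return exactly one category label for a conversation title."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Classify the conversation title into exactly one of: "
                        + ", ".join(CATEGORIES) + ". Reply with the category only."},
            {"role": "user", "content": title},
        ],
    )
    return response.choices[0].message.content.strip().lower()

with open("conversations.json") as f:
    conversations = json.load(f)

counts = Counter(categorize(c.get("title") or "untitled") for c in conversations)
print(counts.most_common())
```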
AI fluency comes from ingraining it into all aspects of your life. Adopting AI in non-professional categories has been critical to my professional usage, building fluency and an understanding of the contours of AI's strengths and weaknesses. AI today can do some things incredibly well, like acing PhD-level biology assignments, and then struggle at other tasks humans would find incredibly simple, like categorization. I like the metaphor of a jagged frontier of capabilities, where, if you look across a plane of equally difficult tasks, AI struggles with some and easily accomplishes others:
Source: Ethan Mollick, my hero
Understanding the contours of that jagged frontier is critical to unlocking value. The magic of AI comes from understanding the frontier well enough to combine the right use case, prompt, and data with the right model or application. Today I feel a degree of fluency in my conversations that I didn't feel a year ago: I can predict when an LLM will excel at answering a prompt and when it will fall flat on its face.
And I think we have reason to be optimistic, if for no other reason than that Chuck has had fewer occasions to apologize to me lately. Those apologies reflect a combination of me testing the limits of what AI can do and me learning how to better communicate my objectives.
ChatGPT apology frequency (e.g. “I’m sorry”, “whoops”, “I apologize”, etc.)
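Counting those apologies is a simple pattern match over the assistant's messages. Here's a rough sketch, with the caveat that the field names (mapping, message, content.parts, create_time) reflect my reading of the conversations.json export and may need adjusting for your file.

```python
# Sketch: tally apology phrases in the assistant's replies, grouped by month.
# The traversal assumes each conversation carries a "mapping" of message nodes
# with author.role, content.parts and a Unix create_time; adjust as needed.
import json
import re
from collections import Counter
from datetime import datetime, timezone

APOLOGY = re.compile(r"\b(i'?m sorry|i apologi[sz]e|whoops|my mistake)\b", re.IGNORECASE)

with open("conversations.json") as f:
    conversations = json.load(f)

apologies_per_month = Counter()
for convo in conversations:
    for node in convo.get("mapping", {}).values():
        msg = node.get("message")
        if not msg or msg.get("author", {}).get("role") != "assistant":
            continue
        parts = msg.get("content", {}).get("parts") or []
        text = " ".join(p for p in parts if isinstance(p, str))
        if APOLOGY.search(text) and msg.get("create_time"):
            month = datetime.fromtimestamp(msg["create_time"], tz=timezone.utc).strftime("%Y-%m")
            apologies_per_month[month] += 1

for month, count in sorted(apologies_per_month.items()):
    print(month, count)
```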
This data exercise showed me how valuable my history with ChatGPT can be, but also how inaccessible it is. The data itself exports as a nested JSON file that took a lot of manipulation (and help from a data scientist friend) to access – none of the existing consumer LLMs could even handle the data volume without significant summarization.
ChatGPT log export format (word doc format available without metadata)
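As a starting point for wrangling that nested JSON, here's a sketch that flattens the export into one row per message so it can be analyzed in a spreadsheet or pandas (same assumptions about the export's field names as the snippet above).

```python
# Sketch: flatten the nested conversations.json export into one row per
# message (conversation title, role, timestamp, text) and write it to CSV,
# which makes the history far easier to slice by month, speaker or topic.
import csv
import json
from datetime import datetime, timezone

with open("conversations.json") as f:
    conversations = json.load(f)

rows = []
for convo in conversations:
    for node in convo.get("mapping", {}).values():
        msg = node.get("message")
        if not msg or not msg.get("create_time"):
            continue  # skip root/system nodes without a usable timestamp
        parts = msg.get("content", {}).get("parts") or []
        rows.append({
            "conversation": convo.get("title") or "untitled",
            "role": msg.get("author", {}).get("role"),
            "timestamp": datetime.fromtimestamp(msg["create_time"], tz=timezone.utc).isoformat(),
            "text": " ".join(p for p in parts if isinstance(p, str)),
        })

with open("messages_flat.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["conversation", "role", "timestamp", "text"])
    writer.writeheader()
    writer.writerows(rows)

print(f"Flattened {len(conversations)} conversations into {len(rows)} messages")
```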
Unfortunately, these historical conversations are NOT accessible to the LLMs themselves (despite what their marketing might indicate). Asking ChatGPT detailed questions about past conversations yields pretty uninspiring results or hallucinations:
It's important to remember that LLMs in their current architectural form DO NOT have a concept of memory or access to prior conversations. In fact, the only thing ChatGPT has access to is a very surface-level log of summary "memories" that is not terribly well curated and lacks true depth of understanding. I suggest auditing your memory; it can be a very odd (and potentially dangerous) snapshot of your life.
What my ChatGPT memory looks like… I'm letting you into the darkest corner of my life here…
This will all change in the next couple of years. The LLM architecture will evolve, and memory will become a core component, where each interaction drives learning and additional value/stickiness. We are going to continue letting AIs deeper into our lives and realize a ton of incremental value through that.
Dharmesh speaks of the future
In the meantime, I would recommend exploring your memory and exporting your ChatGPT (or Claude, or Gemini, or whatever friend you spend the most time with) history in order to better understand your own usage patterns, the split between personal and professional, and opportunities to better access your own data goldmine.
AI Dad joke of the day:
Q: Why did the robot get upset at work? // A: Someone kept pushing its buttons.
To see what ChatGPT stores as memories: Settings > Personalization > Memory > Manage
To export your entire ChatGPT history: Settings > Data Controls > Export Data > Export