When it comes to AI and writing — really, with most things AI — I find myself oscillating between fright and delight.
There is growing evidence that relying on AI to write can erode critical thinking skills, particularly among developing writers. I wrote about this in an op-ed last year, and a recent MIT paper confirmed some of my concerns: students who relied on AI to write showed reduced neural activity, lower memory recall, weaker sense of ownership, and less original thinking compared to their non-LLM-using peers.
At the same time, in the hands of developed writers, AI can be a powerful, though imperfect, ally in the editing process. Much has been said about “vibe coding,” the process of using natural language to bring an app to life. Lately I find myself “vibe writing” — engaging in rounds of dialogue with AI to turn garbled ideas and prose into something more sensible and coherent. At its best, it can stimulate and simulate meaningful cognitive activities involved in writing and editing, so long as the writer maintains agency throughout. (This piece is not about the sort of shortcut AI writing that cheapens the experience and end result.)
As people start developing companionship-like bonds with chatbots that are becoming more expressive and socially responsive, it is not a stretch to see these dynamics extend to the relationship that writers have with AI editors, as they work through voice, clarity, flow, and other mechanics of composition. The kinds of hard questions that good (human) editors ask, the thoughtful resistance that good writers push back with, and the back-and-forth are all now becoming possible through writing tools like Lex.
Having tried different AI editors over the past year, I’ve found some approaches that work, drawing from skills, self-awareness, experience and confidence that I’ve had the fortune to develop over time with teachers and editors. These approaches also reflect a timeless set of competencies that I believe are crucial for future generations of writers to develop, as AI becomes part of our everyday communications and interactions.
1. Lay out as many puzzle pieces as you can at the start.
The most important step is to get as many puzzle pieces on the board as possible without turning to AI for help. This approach is born from my hesitation — now backed by growing evidence — that relying on AI can become a crutch for thinking. The goal is simple but crucial: get the big ideas down and try to build the bridges between them, as shaky as they may be.
There are different ways to gather these pieces. I’m old school and comfortable with throwing words onto a blank document. Others, like my colleague Enzo (a non-native English speaker), start by assembling decks and notes created in the course of their research. Only after developing these materials, and a rough sense of how everything should fit together, does Enzo ask AI to propose an outline. Regardless of method, that initial work of gathering your thoughts should be done with as little AI assistance as possible. The more hard thinking that happens upfront, the better. Asking for help can always come later.
2. Get specific about what’s bugging you.
Rough drafts are just that — rough. It is important to recognize why and how things are a mess, and what one doesn’t like about how words and sentences are coming together. Even better: knowing how to articulate these dissatisfactions to guide AI editors toward suggestions that help writers move from chaos to clarity.
A common problem in my first drafts is convoluted sentences and repetitive ideas, so I often frame my asks to AI with comments like:
- “this part just feels too redundant with the points made…should I kill or reference using another example…”
- “that second sentence in the last paragraph is wordy and clumsy and repeats some key words used before…”
- “i like a metaphor here but mine just feels so cliche…”
3. Ask for “lightweight” suggestions.
Be deliberate about instructing AI to give minimal editing suggestions and to preserve the original ideas and flow. Sometimes, it helps to specify what kinds of recommendations you don’t want.
AI editors — especially off-the-shelf tools like ChatGPT — pick up on the subject matter quickly and default to suggestions based on the corpus of related content they’ve been trained on. For the kind of writing that I typically do here at Reach Capital, they can resort to cloying cliches and buzzwords that are all too common in the venture capital and startup content marketing mill.
- “give me lightweight suggestions that can improve the flow of this sentence without changing the structure or voice.”
- “that is too much, let’s summarize in just one paragraph with the same length as the others.”
- “pls refrain from overused hype words like disruptive”
4. Disagree (a lot).
When AI offers suggestions, many creative decisions remain. Which ones are relevant? Does a suggestion really belong here or somewhere else? Does it make sense in this context? Does it sound like me? Is this what I’m really trying to say?
As an editor, I try to be mindful of injecting too much of my own voice and style. AI struggles with this. Personally, I usually reject more than half of AI feedback. Sometimes, in its effort to make sentences more precise, it kills too many words and neuters my voice. And this should also go without saying: if AI is inserting assertions or claims — or, more dangerously, adding citations and references — please for the love of God check the sources.
In a different AI writing study, researchers found that students who frequently modified AI edits “consistently improved the quality of their essays in terms of lexical sophistication, syntactic complexity, and text cohesion.” By contrast, those who merely accepted whatever AI fed them wound up with poorer work.
Disagreeing with your editor, whether human or AI, is a healthy exercise. Having partaken in contentious editing cycles on both sides of the table, I’ve seen how heated debates can lead to better results. But recognize that AI editors, like any AI chatbot, have sycophantic tendencies and will cave in after being harangued enough. The key is knowing when pushing back serves the writing, versus just trying to win an argument with a machine that will eventually acquiesce.
Solving the Writer’s Dilemma
These tactics underscore the importance of The Big Question I shared up top: How can we minimize the use of AI as a crutch for thinking, while equipping students with the competence and confidence to use it as a productive writing partner in authentic and meaningfully challenging ways?
This is top of mind for us at Reach, and several companies in our portfolio are taking different tacks. Curipod guides students through exercises to reflect on and revise AI-generated feedback. GPTZero’s AI detection and citation-checking tools help educators, students and everyday writers reflect on responsible AI use and preserve human authenticity in writing. Newsela Writing offers automated writing prompts, feedback and rubric-based scoring on student work.
Beyond these tools, there are opportunities to prepare students not just to write better, but to facilitate the kinds of writer-editor relationships I described above. Experiences that:
- guide students to articulate what they don’t like about their drafts, and ask specific questions;
- help them thoughtfully push back against suggestions while preserving their voice;
- foster the kind of dialogue that makes writing feel like a partnership rather than just accepting machine output;
- build confidence in knowing when and where one actually needs AI assistance.
As more educators and entrepreneurs tackle this challenge, we’re hopeful to see solutions that prepare students to have the kind of rich — and occasionally contentious — relationships with AI that preserve the authentic grit and joy of the writing process. If this is something you’re building, I’d love to hear from you!
Thank you to my fellow human colleagues Jennifer Carolan, Jim Lobdell, James Kim and Enzo Cavalie for sharing your experiences and reviewing earlier rough drafts.