Revisiting Dijkstra in the Age of Copilot and ChatGPT
Let's reflect on Dijkstra's essay "On the foolishness of 'natural language programming'" in light of modern natural language processing (NLP).
Dijkstra was writing in 1978, long before modern NLP and large language models (LLMs) like ChatGPT or GitHub Copilot ever existed. If we look at his concerns in light of today’s tech, here’s a balanced take:
Where Dijkstra still has a point, and maybe always will:
1. Precision still matters. Code ultimately has to mean exactly one thing; a machine cannot act on a vague wish.
2. Natural language is still ambiguous. "Delete the old files" can mean a dozen different things depending on context.
3. Formalism still wins in the end. Whatever an LLM produces must eventually be expressed in a formal language before a computer can run it.
Where the world has moved beyond Dijkstra’s fears:
1. AI doesn’t replace programming, it enhances it.
Tools like Copilot or ChatGPT don't do away with the need for formal languages; they help you use them better. AI here is often described as "autocomplete on steroids" - an aid, not a replacement for logic or structure.
2. Natural language interfaces are becoming useful.
Today, you can describe a task in English and get starter code or a script. Especially for non-programmers or rapid prototyping, this is a huge leap forward.
Example: "Write a script to back up my files to Google Drive every night" - a modern LLM can get you 80% of the way there.
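To make the "80% of the way there" claim concrete, here is a minimal sketch of the kind of starter code such a prompt might yield. It assumes, for simplicity, that Google Drive is synced to a local folder (the paths are hypothetical), and it leaves the "every night" part to a scheduler like cron - exactly the sort of remaining 20% a human still has to fill in.

```python
import shutil
from datetime import date
from pathlib import Path

def back_up(source: Path, drive_root: Path) -> Path:
    """Copy `source` into a dated folder under the (assumed) Drive mount."""
    dest = drive_root / f"backup-{date.today().isoformat()}"
    # dirs_exist_ok lets the script be re-run safely on the same day
    shutil.copytree(source, dest, dirs_exist_ok=True)
    return dest

# Hypothetical usage - assumes Google Drive is synced to a local folder:
# back_up(Path.home() / "Documents", Path.home() / "GoogleDrive" / "Backups")
```

Note that even this tiny script embodies precise, formal decisions (date format, overwrite behavior) that the English prompt never specified - which is Dijkstra's point in miniature.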
3. Error handling is better. LLMs can explain cryptic compiler errors and suggest fixes in plain language, shortening the loop between mistake and correction.
DSLs: The Dijkstra-Approved Middle Ground?
One area where Dijkstra’s fears meet a practical solution is in domain-specific languages (DSLs). While he strongly rejected the idea of full-blown natural language programming, DSLs represent a clever compromise - they look more readable, often mimicking natural language patterns, but remain formally defined and tightly scoped.
You can think of DSLs like SQL, CSS, Terraform, or even Markdown - each is a miniature language tailored to a specific domain. They are designed to reduce the complexity of general-purpose programming without sacrificing the precision Dijkstra insisted on. Their syntax is constrained enough to avoid ambiguity, yet expressive enough to get the job done within their niche.
Here’s where things get even more interesting: modern NLP models like ChatGPT can now translate natural language into DSLs. For example, a user can describe a data transformation task in plain English, and an LLM can output valid SQL or a pipeline configuration. This hybrid approach plays to the strengths of both sides:
- Natural language for intent, and
- DSLs for execution.
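To make the hybrid concrete, here is a small sketch: the kind of SQL an LLM might emit for the plain-English request "show the total spent per customer, highest first", run against an in-memory SQLite database (the table and column names are hypothetical, and the "generated" SQL is written by hand here for illustration).

```python
import sqlite3

# Plain-English intent: "show the total spent per customer, highest first"
# The SQL below is the kind of output an LLM might generate for that request.
GENERATED_SQL = """
    SELECT customer, SUM(amount) AS total_spent
    FROM orders
    GROUP BY customer
    ORDER BY total_spent DESC
"""

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (customer TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [("alice", 30.0), ("bob", 10.0), ("alice", 20.0)],
)
rows = conn.execute(GENERATED_SQL).fetchall()
# rows == [("alice", 50.0), ("bob", 10.0)]
```

The fuzzy English carries the intent; the SQL, being a formally defined DSL, carries the unambiguous execution - and crucially, a human can read and verify the generated query before trusting it.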
In other words, we’re not asking machines to "understand" language like humans do - we're using AI to bridge the gap between fuzzy human thought and precise machine instructions, all while standing on the solid foundation Dijkstra would respect.
Bottom line:
Dijkstra was right to be skeptical of using messy human language for precise computation - that's still a real risk. What he couldn't have foreseen is that NLP would evolve into a layer on top of formal languages, helping people express intent faster without doing away with the underlying structure.
So while full natural language programming still carries the risks Dijkstra warned about, DSLs - especially when combined with AI - may be the most realistic path toward usable, intuitive, yet still disciplined software interaction.