As someone who works with technology as part of his day job, I’ve had a front-row seat to the turbulent chaos that occurs each time a new disruptive advance is tossed into the universe. When I was just starting out in IT, the graybeard training me told me about the mass retirements that had occurred when the organization we worked for replaced IBM Selectric typewriters with PCs running the first versions of word processing software; I’d naturally laughed at what had seemed like an apocryphal tale, but as I enter the final stretch of my career, I’m starting to realize I’m seeing an echo of that very trepidation as I warily eye what’s coming down the pike at me.
While I’m not likely to up and retire any time soon over a brilliant new version of Word, I have become increasingly concerned over how smart that and the other tools I use daily have become. Case in point: a few months ago, I was blithely writing yet another email, one of those soul-crushing parts of the administrative day that never seems to end. As I tapped out a response to some query from a colleague, my eyebrows shot up when Outlook began to suggest the rest of the sentence I was crafting. I’m used to autocomplete — hell, half of my text messages would be illegible without it — but I wasn’t prepared for Outlook to literally finish my sentence as though we’d been a married couple for more than thirty years.
And yet, the suggestion was spot on — almost as if it knew how I would phrase something, down to my idiosyncratic use of semicolons and dashes. In a flash, I realized that what I thought had been an obedient technological servant had, in fact, been quietly scooping up everything I’d ever written and promptly learning how to (at the very least) sound like me. It was unsettling enough that I tracked down how to disable the feature and then unplug the giant vacuum in the cloud that seemed to be getting smarter with every keystroke.
I’ve had even more hesitation about the application development software we use at the office, Microsoft Visual Studio. About a year ago, Microsoft introduced something called IntelliCode. At first, it was actually a nice improvement over what we used to call IntelliSense — an older system that made suggestions, but only in a very high-level sort of way. IntelliCode had a bit more AI behind it and was capable of understanding the context of what you were doing before making any sort of suggestion.
This past spring, it took the same sort of unsettling leap that Outlook made on me. Suddenly, as I typed out my algorithm, IntelliCode began trying to guess where I was going and then helpfully ghosted in the rest of the code. No longer was I getting a single line or a variable name; no, this sucker was literally trying to do my job. And, if I’m being honest, it actually did a pretty good job of it, too.
There’s an episode of classic Star Trek where Captain Kirk has to take a backseat to a computer that proves it can command the Enterprise better than a human; for a few months now, I’ve begun to wonder if people in my position — professional application developers — are going to begin to fade away, replaced by algorithms trained on our decades of hard work and experience. One of the pleasures of the work I do is the creativity involved in solving a problem for a client; it’s a bit sobering to realize that at some point in the near future, my clients will be able to ask the software itself for any changes they need.
Is it more efficient? Probably. Would the solution be as good? Most likely. Will I be needed? About as much as that Selectric typewriter.
I used to think that time was way off in the future, but as fast as tools such as ChatGPT are advancing, now I’m not so sure. More and more, I feel like I might be the last of a breed, someone who will be replaced by a new class of professionals who become the whisperers to this new tech — the ones who know just what to say to tease out the sort of complex solution I would have crafted in the past.
I worry in a similar way about the art of writing; there’s been a comparable rise in the use of this smart technology to create novels, plays, and movies with minimal human interaction. Feed in the ten or so plot points you want and within minutes you have a first draft that — as crazy as it seems — winds up being pretty readable, if predictably formulaic. I know this is one of many reasons the WGA is currently on strike out in Hollywood; I share their horror at the very real possibility that episodes of our favorite television shows will be plucked out of the ether by a bot, no human intervention required.
Maybe I’m overreacting; then again, I’ve been in this field long enough to think I’m not. Change is usually good, and I’m usually on the side of progress. For the first time, I’m actually wondering if we’ve misdefined the term completely…