  • opinions, television

    Watson and Moriarty from “Watson.” Image source: CBS

    Watson (2025 CBS) is a lukewarm Sherlock Holmes adaptation

    The old CBS show “Elementary” is among my favorites. It might be my ultimate comfort watch. Lucy Liu and Jonny Lee Miller deliver pitch-perfect performances for seven straight seasons, and while it has its rocky story moments, no single season is skippable, the themes are compassionate, and the stories are engaging.

    “Watson” is from the same network and the same showrunner. I didn’t feel the need to fill any void after “Elementary,” but I figured I might as well try its spiritual successor.

    In this adaptation, Holmes has fallen off Reichenbach Falls with Moriarty. Watson jumped after them in a rescue attempt. As far as Watson initially knows, he’s the only survivor.

    Doyle’s characters show up in various adapted forms, as with “Elementary”: Clyde is a robot rather than a turtle, Shinwell is a series regular, and Mary Morstan is an ex-wife instead of a boyfriend who gets poisoned. After watching so much “Elementary,” it’s a little jarring to see these characters revised in new ways by the same mind. You can tell the creation is by the same person with the same interests, but he shifted everything to the left.

    An extremely familiar story with new faces.

    “Watson” manages to stay fresh by focusing on medical drama instead of a police procedural. The cast is populated with Watson’s younger doctor apprentices, who are talented, brilliant, and so uninteresting to me that I can’t tell you what any of them are named. I guess I could look it up.

    All the actors rise to the occasion, but Morris Chestnut is the main standout. His performance is always majestic. The man has gravitas. His Watson is concerned with medical justice as much as anything else, and he will make non-medical things his problem if injustice has been done.

    Full-season spoilers ahead.

    Moriarty is present throughout the season, popping up here and there. I never got the feeling he was a powerful mad genius with amazing powers of deduction and manipulation. He was just a guy who kept blackmailing people into bad behavior.

    Gene editing allows Moriarty to target Watson’s team. Maybe this would have been more compelling if I cared about his team. By the time the twin doctor guys got sick, I was actually just hoping it would kill one of them off. I didn’t like the actor’s performance as two different guys. They weren’t distinctive to me beyond whether or not they were wearing glasses, and he mumbled through all his lines. Knocking one of them off would have made sense and given the actor time to focus on developing just one character.

    Alas, it was not to be. All the good guys were saved. We rolled toward the end of the season with me feeling mildly entertained – not excited, but not really dissatisfied either.

    My opinion crashed and burned at the end.

    “Watson” ultimately lost me in its last episode. At the end, Watson chose to kill Moriarty by inducing a fatal stroke. His argument was that Moriarty was too dangerous to let live.

    Nothing I saw from Watson to that point suggested he would be willing to kill a man, even one who demonstrably deserved it. Watson fanboys for his “dead” BFF Sherlock Holmes, but he doesn’t seem especially vengeful; he’s focused on medicine, not on going after Holmes’s murderer. Watson’s history with the military was downplayed because he was always more doctor than soldier. He helped Irene Adler, knowing she was likely manipulating him.

    He always erred on the side of doing no harm.

    Ethical gray areas don’t seem to imply a willingness to kill, either. It wasn’t like he bonded with his favorite student (whose name I still don’t remember) because she killed her father. A moment where he heatedly announced, “I would have killed him too,” might have been enough to convince me.

    Nothing like that happened. At least, not that I recall. I slammed through the season in about two days – details may have escaped me.

    I thought about rewatching the season to see if it supported this character moment in ways I didn’t notice. But this is the ultimate indictment of “Watson”: I didn’t like it enough, care about the characters, or feel any desire to spend time rewatching it. At all. Not even to analyze the story better.

    It feels like they just wanted Watson to do the practical thing and kill the bad guy. It’s a long-running discussion in various fiction circles: isn’t killing a dangerous villain the most compassionate thing a hero could do? Why doesn’t Batman end the Joker’s reign of terror and kill him instead of sending him to Arkham Asylum?

    I don’t buy into “Good Guys Just Don’t Do Things Like That” as an argument. But this feels like a totally unsupported character moment that exists only to say, “See? Batman really should just kill The Joker.” (Although in this case, Watson is arguably more Nightwing than Batman.)

    It’s hard to imagine anyone getting too attached to “Watson,” but let’s give it a chance.

    Lukewarm reaction aside, “Watson” deserves more seasons. I watch a lot of TV shows over the course of my day. I always have something going while I play video games, clean house, practice illustration, etc. Thus I can authoritatively say that it’s normal for the first season of a show from any era to kinda suck.

    A wise network is one that sees the good points – of which “Watson” has many – and chooses to nurture the show through its awkward phase.

    Rochelle Aytes is a great Mary Morstan, who actually survived the season (which I didn’t expect). Deliberate ambiguity around Moriarty’s fate leaves opportunities for more stories with him, even if he’s dead.

    They cast a wonderful voice to portray Sherlock Holmes, and I would chop off my favorite toe to see Matt Berry in a homosexual love spiral with Morris Chestnut.

    Although I didn’t really care about the younger doctors, there’s a lot of sapphic potential in the tension between the two women that would make me happy to return to them.

    And who knows? Maybe they’ll knock off one of those twins and make me really happy.

    You can stream “Watson” Season One on Paramount+ (formerly CBS All Access).

  • opinions

    Does ethical generative AI usage exist?

    Generative AI remains a polarizing topic of conversation. It feels like everyone has a strong opinion, and it’s either “love it, use it” or “hate it, deride it.” I often fall on the side of “NO, no way,” but I keep vacillating between that and “why bother?” and “eh, maybe?”

    As background: I initially played around with genAI for image creation but stopped once I realized that the datasets were produced by scraping the internet without regard for the creators involved. The idea that huge companies are profiting off of this sticks in my craw.

    I can’t get over the way this steals labor and consolidates more wealth in the hands of the ruling class. I can’t understand how AI proponents don’t see that this is our society at its worst, in hyperdrive.

    That said, I also find I cannot stand uniformly opposed to generative AI. I’m usually an early adopter of tech. I love the ways that technology has changed and grown throughout my lifetime.

    What we now call “AI” has become an enormous basin holding an extremely diverse array of technologies. A lot of those uses are harmful. Others are useful. I don’t want to throw the robot baby out with the AI slop water.

    Adobe Photoshop has had content-aware fill for a long time now. The idea is that you select a part of an image, and Photoshop fills in what it expects should occupy that space using the pixels around it. I’ve been using content-aware fill since at least Photoshop 2019. For personal photos, it’s been a quick way to remove things like telephone poles from the sky. What it produces is not much more sophisticated than a clone stamp. I class this as a useful tool – though it’s also not generative AI.
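
    For the curious, the non-generative version of this idea is easy to sketch with classical inpainting. The snippet below uses OpenCV rather than anything of Adobe’s (the filename and mask coordinates are made up), and it only illustrates “fill a selected region from the pixels around it” – no generative model involved.

    ```python
    # Rough analogue of content-aware fill using classical inpainting.
    # This is NOT Adobe's algorithm; it just fills a masked region from
    # the surrounding pixels, which is the same basic idea and nothing generative.
    import cv2
    import numpy as np

    photo = cv2.imread("vacation.jpg")               # hypothetical input photo
    mask = np.zeros(photo.shape[:2], dtype=np.uint8)
    mask[40:400, 510:530] = 255                      # "select" the telephone pole

    # Fill the selection from nearby pixels (Telea's inpainting method).
    cleaned = cv2.inpaint(photo, mask, inpaintRadius=3, flags=cv2.INPAINT_TELEA)
    cv2.imwrite("vacation_no_pole.jpg", cleaned)
    ```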

    Later, Adobe added “generative fill,” which is like content-aware fill on steroids. It uses Adobe’s family of generative AI models, called Firefly, to create a more complicated image that the average person would recognize as AI-generated.

    According to Adobe’s FAQ about Firefly, the models are trained on public domain images and Adobe’s large stock library. If you’ve contributed to the stock library, your work is in the dataset. This is covered by the terms of service, although I don’t know how and when that usage was added.

    How well did they inform contributors? Did everyone know in advance what this would mean? How many contributing artists belong to agencies whose managers made those decisions above their heads?

    It’s hard to say, but it’s still better than Meta deciding it’s fair use to steal from authors because their books have no value.

    Where does Microsoft’s experimental AI engine fall on this spectrum?

    John Carmack, a game dev elder and co-originator of the most classic boomer shooters, describes this AI use as a useful tool. I’m not inclined to defer to authority (an expert’s opinion is still just an opinion, not above scrutiny), but I think he’s right that AI algorithms will be a growing part of the workflow.

    I tried playing Quake II (one of my all-time favorite shooters) in this format. It’s obviously not yet a playable commercial game. It’s dreamy, foggy, and forgetful. At best, we can say it’s recognizable as the original game, and you can move inside of it.

    But this is the first time I’ve glimpsed something that genuinely feels like a future successor to current game engines. Can it become markedly better? Is the output always going to be worse? Will it deprive game devs of jobs? I don’t know yet. I do find it interesting, though. I don’t want to discount it out of hand.

    Ultimately, I evaluate individual tools that fall into the AI bucket like this:

    1. Does it reduce desirable work for creatives? I don’t think it’s a big deal when people use AI models for silly little personal projects. I don’t care if people want to see their face on a Bridgerton character. And using generative tools to make photo editing slightly less onerous makes a creative’s life more pleasant. I can’t tell you how grateful I am that it’s easier these days to isolate models and remove that one patch on a jacket sleeve. I don’t miss clone-stamping and painting that stuff away.
    2. Does it disrupt necessary career progression? Replacing junior developers with AI blasts apart the entire industry in counterproductive ways. How does a senior developer become a senior developer if she can’t make a living as a junior developer first? How will artists ever graduate from Art As A Side Job to Art As A Full Time Job if all the freelance jobs are replaced by people whipping up muddy AI garbage?
    3. Does its usage on an individual level harm other creatives? Using datasets built from material that artists did not consent to include is harmful. When you generate art with many of these models, especially when you put specific artists’ names in the prompts, you are personally inflicting harm on creatives. Whole-image generation is likelier to resemble preexisting work by a single artist, which should be regarded as plagiarism. Adobe Firefly is maybe okay to use where you would otherwise use Adobe Stock, although it’s a horrible gray area.
    4. How significant is the environmental impact? As with most environmental issues, individual use is never as impactful as institutional use. Microsoft’s Copilot has been forced onto vast numbers of desktop computers and devours computing power whether you like it or not. On the other hand, Apple’s on-device AI does its processing on your phone, which might mean charging your phone slightly more often; that’s probably not a big deal. And an individual running a couple of prompts is a drop in the ocean compared to Google adding AI processing to every single search query. Over time, as processing becomes cheaper, the environmental impact will decline. It’s alarming right now. It will improve.
    5. Does it do a good job? Whole-image generation can look as glossy as you want these days, but the algorithm doesn’t have intention, purpose, or an individual’s life experience to create a distinctive lens. Even when you’re editing out errors, you’re still leaning on the blandest, most generic commercial imagery possible. Things tend to look plastic. Women are homogenized into their most offensively attractive forms. And I still haven’t read any AI-generated text that isn’t a circuitous, unfocused, tension-free disaster of word soup. Trying to make AI output usable takes just as much work as making the thing yourself.

    I’m sure I’m missing a few points, but these are the ones off the top of my head. And with this litmus test applied, there isn’t a ton of common AI usage that I would consider appropriate.

    But there is some.

    Many artists agree that using AI to generate mock-ups is no big deal. Anything where an individual isn’t putting AI into a final, sellable product is probably okay. AI that speeds up the unappealing parts of labor (like finding the exact code you want in a library) is helpful rather than thieving. These are natural progressions of existing technology, and they will become less damaging as environmental concerns are addressed and (hopefully) more datasets are built from material provided consensually by credited, compensated creators.

    The very fact that I believe “appropriate AI usage” is up for personal evaluation makes me feel more generous toward all the individuals involved. We’re all trying to figure out how to navigate a complicated world. The problem isn’t really the technology, but the fact that a sickly society can only use new tools in sickly ways.