In 2024 Apple released an ad for the iPad Pro. A hydraulic press slowly destroys a grand piano, paint cans, cameras, books, guitars, trumpets, record players — physical manifestations of human creative expression — as they are compressed into a thin slab of aluminum and glass. The intended message was that all you need is an iPad to access all these art forms — that the iPad is the ultimate window to creativity.
However, people did not take the ad this way. It became a symbol of a general discontent with technology: in place of all these physical, specific, intentionally designed tools and mediums, we get a blank featureless slab that doesn't do anything particularly well and mostly turns into an endless consumption device. The backlash was severe enough that Apple issued a public apology and pulled the ad from TV. Hugh Grant called it "the destruction of the human experience."
The company that set itself the goal of creating "the bicycle for the mind" has lost its way to the point where it is now putting out messaging that visualizes the human experience being crushed and compressed into a box. This is not just an Apple story. It is the story of Silicon Valley over the last three decades — a slow forgetting of what technology was originally for: not to shortcut human activity, but to empower humans to do things that had never been done, to build mediums that had never existed.
A favorite criticism is that tech bros missed their humanities classes in college. But that misses the point. Reading Kant or Heidegger, or taking an art appreciation class — which most of these people have probably done — does not teach you what actually got lost. What got lost is something more basic: the belief that the most valuable thing technology can do is not remove effort from existing activities, but enable entirely new ones. That the goal is not shortcutting the human, but empowering them. That efficiency is not the only axis worth optimizing on, and may not even be the most important one.
There is a specific post-60s cultural cocktail that produced the best of early tech. Countercultural utopianism. New Age metaphysics. A real engagement with art, music, and literature. Stewart Brand and the Whole Earth Catalog. The Grateful Dead as early adopters of the WELL. John Perry Barlow's Declaration of the Independence of Cyberspace, which reads as naive now but was animated by a genuine belief that the network could be a space for human liberation rather than corporate extraction and efficiency gains.
Technology is a tool for expanding human capability and consciousness, not replacing it. And crucially — the act of using the tool was supposed to make you more of a person, not less of one. The destination and the journey were both the point. Apple under Steve Jobs centered itself in this philosophy. The Mac's proportional fonts came directly from a calligraphy class he audited. His obsession with industrial design wasn't aesthetic vanity; it came from a belief that how an object feels in your hand is part of what it communicates. He described the computer as a "bicycle for the mind" — a tool that makes the human more powerful, not a substitute for the human. The metaphor is precise. A bicycle doesn't choose your destination or walk for you. It applies leverage to the effort you were going to spend anyway. Jobs cited a study he'd read as a child: scientists measured energy per unit of distance across animals, and the condor won — most efficient locomotion on Earth. The human, without tools, was mediocre by comparison. But put a human on a bicycle and they blow past the condor by a factor of three. The computer was that, for the mind — not intelligence replacing yours, but a multiplier on the intelligence you bring to it. The person still has to pedal. The person still navigates. The agency stays human; the reach extends.
More concretely, Jobs understood that some of the most interesting territory in tech was where technology met artistic production. The 1996 Charlie Rose interview with Jobs and John Lasseter is worth watching in full. Rose asks the question you'd expect: does this make movies faster? Cheaper? Lasseter reframes it completely.
"We don't look at it as a way to replace any of the creativity or any of the real art of film making. We look at it as these are just a great new expensive pencils. It's artists using these computers as though an artist at Disney uses a piece of paper and a pencil."
He then goes further. "It's a very common belief that when computers come into a new artistic medium, it's either less expensive, takes less time, or it's going to replace something. It reminds me a lot of when photography was invented — everybody thought that painting was going to get replaced." The computers aren't cheaper. They aren't faster. They're a different medium that lets you tell different stories.
Lasseter is describing a pattern that recurs with every new medium. The Roland TR-808 drum machine was a commercial failure when it launched in 1980 — it didn't sound like real drums — and was discontinued after two years. Then hip-hop and electronic music got hold of it, and the 808's synthetic kick became the sonic foundation of an entire half-century of popular music. MIDI was supposed to make musicians obsolete; instead it gave composers direct access to the full palette of orchestral sound from a bedroom. Desktop publishing — PageMaker, Quark, a laser printer — was going to destroy graphic design by letting anyone make a layout; instead it produced entirely new visual aesthetics and democratized a craft that had been locked behind expensive typesetting equipment. Digital audio workstations and cheap recording interfaces collapsed the cost of a studio to near zero, and rather than flooding the world with garbage music, they enabled genres — lo-fi, vaporwave, bedroom pop — that could only have come from individuals making things alone at odd hours with inexpensive tools. Digital photography was not real photography — no grain, no chemical process, no darkroom. Every time, the argument runs the same way: the new thing will either replace the old thing or cheapen it. Every time, the new thing becomes its own medium with its own masters, its own aesthetic vocabulary, and a body of work that could not have existed any other way. Jobs, who got into Pixar initially because the technology "blew him away," lands in the same place after a decade alongside Lasseter: "all this technology really is just in the service of the storytelling. The computers don't do the animation — they do the drawing."
That distinction matters enormously. Pixar's technology — and ILM's before it — was developed in service of a creative vision. The renderer existed because Lasseter had something he wanted to show that no existing medium could show. The technology was downstream of the art.
The proof of this is in the specific things Pixar chose to build. In Toy Story 4, the cinematography team implemented split diopter shots — a technique from live-action filmmaking where a split lens holds two subjects at very different distances in simultaneous sharp focus, used by De Palma, Spielberg, Gordon Willis. In CG you have no optical physics forcing your hand. You could just render everything sharp. Pixar implemented split diopter shots on purpose, because they were thinking like filmmakers, not like computer scientists. They were asking: what does this scene need cinematographically?
The Good Dinosaur did similarly demanding work on crowd and herd animation — not because the technology required it but because the story required convincing scale, mass, living ecosystems of creatures. The question was always what the scene needed, and the technology answered.
WETA FX today still operates this way: Joe Letteri and his team build tools because directors like James Cameron have specific things they need to express. The tech doesn't drive the story; the story drives the tech.
GarageBand is another example. When Jobs introduced it at Macworld 2004 with John Mayer on stage, the demo was a musician using a tool to make music. Mayer plugged in his guitar and played — but that was the demo, not the point. The point is that anyone with something musical in their head could now get it out. You didn't need to play. You could build loop by loop, note by note. What you needed was intent. The software removed the barrier between the idea and the listener — it did not remove the human with the idea.
This is what it looks like when technology is genuinely human-centered. The tool amplifies the human. It does not replace them. And the process — Mayer practicing, the Pixar animator studying light, the engineer and the artist arguing about what a scene needs — is not inefficiency to be removed. It is the work. It is what produces people who know what they're doing.
Silliwood — the 90s portmanteau for the convergence of Silicon Valley and Hollywood — was a brief, mostly failed attempt to formalize this intersection. Most of those companies were, as Jobs put it, places that existed "so they can raise investment, but they never produced any products." Pixar was, in his view, the only enterprise where the meeting of Silicon Valley and Hollywood had actually produced something. The reason Pixar worked and Silliwood didn't is the same reason: Pixar had Lasseter, and Lasseter had something to say. The technology followed the vision, not the other way around.
Somewhere in the 2000s-2010s, this tradition died quietly.
The dominant culture of Silicon Valley stopped growing out of art. It grew out of B2B SaaS. The founding mythology shifted from "bicycles for the mind" to "growth hacking," "enterprise ARR," and "product-market fit." The countercultural aesthetic got replaced with the aesthetic of the McKinsey deck: clean sans-serif fonts, white space, and an earnest belief that every human problem was an optimization problem.
Part of what drove this shift was structural. Venture capital developed a playbook good at evaluating one kind of bet: a measurable improvement on something that already exists. X% faster, Y% cheaper, growth rate, annual recurring revenue. These are legible — you can model them, defend them in a partners meeting, show them to LPs. The kind of bet Lasseter represented is much harder: a new medium whose economics cannot be modeled because the audience does not yet exist, whose value will only be evident after years of making things in it. Jobs funded Pixar personally for nearly a decade before it produced anything — close to $60 million of his own money, in a company with no revenue and no proof of concept for the medium it was trying to build. That kind of patience, and that tolerance for illegible upside, is structurally incompatible with the incentives of a modern venture fund. What gets funded is what fits on a pitch deck: productivity gains, market size, churn reduction. New mediums don't fit on pitch decks. They require an investor to imagine something that doesn't exist yet and believe, without evidence, that people will want to inhabit it once it does. That takes a different kind of bravery — and the industry optimized it out.
There's a version of this story that keeps repeating in computing specifically. VisiCalc — the first spreadsheet, 1979 — automated the work of bookkeepers and accountants who spent careers doing by hand what a computer could now do in seconds. Word processors eliminated the professional typist. CAD software eliminated the draftsman. Desktop publishing eliminated the typesetter. Each time, the argument from the technology side was pure efficiency: the computer does this faster and cheaper, therefore the human who used to do it is no longer needed. And each time, that was partially true and wholly incomplete — because the people being displaced weren't just doing the mechanical task, they were holding craft knowledge, judgment, and aesthetic sensibility that the software could not encode. When you fire all the typesetters and let the marketing team do layout in PageMaker, you save money and you lose something harder to name.
More than aesthetics: the fundamental question changed. It used to be what does this enable a person to do and become? It became what friction does this remove? Technology stopped being an end — a thing worth engaging with because of what it does to you — and became purely a means. A pipe. The output is what matters; the human in the middle is a cost to be minimized.
This logic does not stay in enterprise software. It leaks into every consumer product built by the same culture. DoorDash, Uber Eats, Instacart — the pitch is pure friction removal: food appears, you did not have to cook, you did not have to go anywhere, you did not have to talk to anyone. The convenience is real. But cooking is not just calorie acquisition. It is a skill, a ritual, a way of caring for people, a form of creativity with immediate sensory feedback. The process of going to a restaurant is social, spatial, cultural — you are somewhere, with someone, the meal is an event. What gets optimized away is not just effort. It is the texture of daily life, the practices through which people develop competence and connection. The app does not ask what cooking does for you. It asks how fast you want it gone.
The same logic governs attention. TikTok removes the friction of choosing what to watch. The algorithm decides; content arrives; you never have to seek, discover, or exercise taste. But finding things — building a sense of what you like and why, following a thread of curiosity until you've learned something, being bored enough to go looking — is not inefficiency. It is how you develop taste. Instagram removes the friction of staying in touch, and replaces actual connection with broadcast. The friction of calling someone, having a meal, being present — gone. What's also gone is intimacy. More broadly, social media removes the friction of boredom. Idle moments used to be spaces for thought, daydreaming, conversation. Now the phone fills them instantly. The space in which ideas form gets optimized away. The For You Page does not ask what watching does for you. It asks how long it can keep you there.
The trajectory of Nest captures something sharper: the efficiency-first culture doesn't just produce this kind of product — it absorbs even companies that started elsewhere. Tony Fadell — who had led the iPod team at Apple — founded Nest in 2010 to build a thermostat. The original Nest Learning Thermostat was a round, beautifully machined object with a satisfying physical dial. But the meaningful design decision was behavioral: it didn't ask you to program a schedule. It watched you for a week and learned your patterns. The human's life, habits, and comfort remained the point; the thermostat served them without demanding you become an HVAC programmer to use it. That is the bicycle — leverage applied to what you were already doing.
Google acquired Nest in 2014 for $3.2 billion. The thermostat became a node in an ambient home intelligence system: a data point integrated into Google's identity and advertising infrastructure, eventually absorbed into Google Home. The pitch shifted to "set it and forget it." But "forget it" is the tell. The goal became to remove you from the loop entirely — stop thinking about your home, stop interacting with it, stop being present in it. The friction being removed was also awareness. The tool stopped amplifying your preferences and started making decisions on your behalf, invisibly, in service of a platform whose actual customers were advertisers. Fadell left Google in 2016.
This is the culture that produced the Crush ad. The team that made it had a mental model where compression = good, efficiency = good, doing more with less = good. The hydraulic press was not violence — it was elegance. They had lost the ability to see what they were destroying, because the things being destroyed — the process of playing piano, of developing your eye with a camera, of writing something difficult and slow — had already stopped mattering to them.
Now we have LLMs, and the confusion has deepened.
The pitch is: "everyone is a creator now." Everyone can generate images. Everyone can write. Everyone can compose music. The technology democratizes creative production. This is not subtext — it is the explicit framing from the people building these systems. Jensen Huang at the 2024 World Government Summit: "It is our job to create computing technology such that nobody has to program. And that the programming language is human. Everybody in the world is now a programmer." Sam Altman has said entire job categories will be "totally, totally gone" as AI agents join the workforce. Dario Amodei has warned that AI could eliminate half of entry-level white-collar roles within five years. The framing is not "here are superpowers for humans." The framing is "here is a replacement for humans, and the efficiency gains justify it."
These are metrics that make sense on a spreadsheet. Efficiency. Headcount reduction. Cost per task automated away. What they don't capture is what humans actually care about: whether the work means something, whether they're growing, whether they made something true and got it into the world. The countercultural tradition that produced the best of early tech cared about those things because it came from people for whom those things mattered personally. The B2B SaaS tradition that replaced it optimizes for the spreadsheet metrics, because that's what closes enterprise deals. This is not malice. It is a profound mismatch between what the tool-builders are measuring and what the people using the tools are actually living.
The consequence of this framing is not just philosophical — it is actively shaping the products. Every AI interface designed under the efficiency logic is answering the same question: what existing task can I make cheaper or faster? The chat box is a support ticket. The AI coding tool is a faster typist. The image generator is a cheaper illustrator. These framings treat AI as a shortcut inside existing workflows, not as a new medium with its own vocabulary and demands. The question Lasseter was asking in 1996 — what new thing can a person do with this that they could not do before? what new medium does this enable? — is not being asked, because it does not close enterprise deals. The result is a genuinely transformative technology being built, sold, and understood primarily as a cost-reduction tool. Which is roughly like Pixar deciding the main value of their renderer was that it made animators faster.
The interface is the most legible symptom of this. The way we interact with AI is through a chat box: text in, text out. The nearest analogy is SMS. This is not an interface metaphor — it is the absence of one. The humanist tradition in computing always cared deeply about interface. Engelbart's 1968 Mother of All Demos was fundamentally about inventing metaphors for thought. The Mac's desktop metaphor was a genuine intellectual achievement: it made the file system legible to non-engineers by grounding it in physical space. HyperCard. VR as Jaron Lanier conceived it — not entertainment but empathy machine, a way to inhabit other perspectives and expand what it means to be a person. These were hard-won insights about how humans think and how technology could meet them where they are. They came from people who read philosophy, who thought about phenomenology and perception, who asked: what is this for, in the deepest sense? The chat interface for AI does not come from that tradition. It comes from: "you know how to text, so text the computer." It is skeuomorphism in the worst sense — dressed in the clothes of text messaging because the people building it never asked whether text messaging was the right metaphor for what this technology actually does.
This is the Syndrome problem. In The Incredibles, the villain's plan is to sell superpowers to everyone — because "when everyone's super, no one will be." The goal is not to celebrate human capability. The goal is to make human capability irrelevant. The Silicon Valley version: when everyone can generate a painting without learning to paint, the painting is worth nothing, and — more importantly — you have learned nothing. The capacity for judgment, for seeing, for knowing what's good and why, does not develop without practice. Automating out the practice automates out the person.
The process is not a bug. The struggle of learning to draw, to write, to play an instrument — that is not inefficiency waiting to be removed. It is how you become someone with something to say. Philosopher C. Thi Nguyen calls this "striving play" — the idea that we sometimes choose ends specifically because the means are what we actually want. The summit is an excuse to climb. The painting is an excuse to see. Skip the process and you get the output without the person behind it. You get aesthetics without interiority. Gram-worthy, soulless.
Nguyen's work on gamification and value capture is equally relevant here: when you reduce a rich activity to a score or metric, people stop doing the activity and start optimizing the metric. The Instagram camera doesn't just skip the process of learning to see — it installs a new goal (engagement, likes, the gram-worthy image) that actively competes with the old one (making something true). Once the metric is in place, the underlying value gets crowded out.
There is a growing body of research on exactly this — cognitive debt, skill atrophy, the augmentation trap — and the findings are roughly what you'd expect: offload the thinking and the thinking weakens. But the more interesting question is not whether LLMs cause atrophy. It is which skills are being offloaded, and whether those are the ones that mattered.
GUIs made computers accessible to everyone and in doing so made most people unable to use a command line. That is a real loss. It is also, probably, the right tradeoff — most people did not need the command line, and what they gained was worth what they gave up. The question with LLMs is whether the same logic holds. If what gets offloaded is syntax recall, boilerplate, and the mechanical parts of writing — and what remains is design decisions, API and interface choices, abstractions, the judgment calls that actually determine whether something is good — then LLMs could make people better designers in the same way GUIs made computers accessible, while atrophying skills that were always more means than end. You stop thinking about where the semicolons go and start thinking harder about what the system should do and why.
Whether that is actually what happens depends entirely on how you use them. Use them as a crutch and you get the cognitive debt. Use them as a focusing lens — offload the stuff that doesn't require your judgment so your judgment can go where it matters — and you might end up somewhere genuinely interesting. The tool isn't the problem. The orientation toward the tool is.
Agentic AI interfaces push this further in an interesting direction. Complex one-off infrastructure tasks — spinning up a multi-node training cluster, configuring a distributed system, wiring together a pipeline that crosses four different services — used to require either deep specialist knowledge or a specialist to hire. The task was tractable only if you already knew the terrain. Agentic tools make the navigational complexity manageable for someone who knows what they want but not every step required to get there. The human judgment that matters — what am I building, what scale do I need, what tradeoffs am I making — stays with the human. The part that was always just terrain navigation gets handled. This is squarely in the GUI tradition: lower the floor, preserve the decisions that require a person.
The tools Jobs, Lasseter, and Letteri built held to one principle: raise the ceiling on what a human could express while keeping the human as the one doing the expressing. The floor was never technique. It was intent — something to say.
The current generation of AI tools is pitched differently. They are not "here is a more powerful brush." They are "here, let the machine make the creative decisions." The iPhone camera that automatically adjusts your composition, boosts the colors, removes the blemishes — it produces something that looks gram-worthy while systematically removing every decision that would require you to develop taste. Does this save money? Are you more efficient? Does it matter, as long as the output looks good?
Lasseter's framing from 1996 is still the right one. The question is not whether the technology is cheaper or faster. The question is what kind of medium it is, and what it demands of the person using it. Pixar's expensive pencils still required you to be an animator. They raised the ceiling on what an animator could do, but the floor was still: you have to be an animator. The AI camera does not require you to learn to see.
The distinction matters even within AI tools themselves. Gaussian Splats — a new medium for capturing and rendering reality as navigable 3D scenes — use machine learning at their core. The reconstruction is trained. But the human creative experience is not removed because there is "AI" in it. The artist still decides what to capture, how to frame it, what to do with it afterward: relight it, add depth of field, composite 3D objects into it, bend the geometry. When Corridor Crew's Wren used a Gaussian Splat to capture Inspiration Point — a mountain landmark that burned in the LA wildfires — the AI did the reconstruction. The human decided it was worth preserving. That impulse, and everything that flows from it, is irreducibly human. The tool served it. As Wren puts it: "I'm a visual effects artist and my job is to bend reality." The AI is infrastructure. The bending is still his.
Vibe coding — directing an AI to write and iterate on code through natural language — is getting the same dismissal every new medium gets: not real programming, a shortcut that bypasses the work. But the argument applies the wrong floor. The floor of traditional programming was syntax, library APIs, debugging mechanics — terrain navigation. Vibe coding removes that floor the same way GUIs removed the command line. What it does not remove is the judgment that actually makes software good: what are you building and why, what constitutes a correct solution, where are the edge cases that matter, what does acceptable performance look like, when is the AI confidently wrong. These are not syntax problems. They are what separates engineers who understand systems from engineers who can write code. The medium is new. The demands are not gone — they are exposed, because the mechanical layer is no longer there to hide behind. A programmer who cannot articulate what "correct" means, who cannot recognize a plausible-looking solution that fails on the case that matters, who has no taste for what a well-designed system feels like — that programmer is worse off with vibe coding, not better. The one who can do all of that has a new and more powerful pencil.
Critics of AI are mostly reacting against the efficiency discourse — and they are right to. When the loudest voices in tech are promising to eliminate jobs and automate away expertise, the fear is earned. But the critique goes wrong in two ways. The first: it attacks the technology for what the efficiency culture has decided to do with it. AI-as-replacement-engine is a product choice, not a property of the technology — dismissing AI because of how Silicon Valley pitches it is like dismissing CG animation because Hollywood used it to cut headcount. The second: the argument that AI-generated work is inherently inauthentic — that using AI is cheating, that the human must suffer through the process for the output to count — confuses the tool for the artist. Both errors mistake the medium for the business model of the people currently exploiting it.
Hip-hop built its entire sonic vocabulary on sampling — lifting pieces of existing recordings and recombining them into something new using drum machines and samplers. It was called theft, literally: the copyright lawsuits that followed Grand Upright v. Warner and Campbell v. Acuff-Rose reshaped intellectual property law because the industry could not accept that recombining existing recordings was a creative act. It was. DJ Kool Herc isolating the break in a funk record to loop it indefinitely — using two turntables as an instrument — was not stealing. It was inventing a new medium with two pieces of consumer electronics and a good ear.
The demoscene — programmers and artists making real-time audiovisual art within extreme hardware constraints on early PCs and home computers — was not considered "real" art by the mainstream. It was kids messing around with computers. But the demoscene produced a genuine aesthetic tradition, a competitive community of practice, and a generation of people who understood computation as a creative medium in a way that pure engineers never did. Chiptune artists making music from the sound chips of Game Boys and Commodore 64s were not making real music. Vaporwave — built almost entirely from samples, pitch-shifted and time-stretched, assembled in DAWs by anonymous internet users — was not real music. Flash animation on the early internet was not real animation. Autotune, introduced as an imperceptible pitch-correction tool, became an expressive instrument in its own right: T-Pain, Bon Iver, and Kanye each made it do things its inventor never imagined, and those are real songs. Every one of these was dismissed as cheating, as not serious, as a shortcut that bypassed the real work. Every one of them produced genuine art made by humans with something to say. The argument "this is cheating" or "this is not real" has been applied to every new computing and electronic tool in the history of making things. It has been wrong every time for the same reason: it mistakes the process for the point, the medium for the message, the tool for the artist.
Gaussian Splats use machine learning. A GarageBand track built loop by loop without touching an instrument is still a song if someone had something to say in making it. Photography was not painting and the critics who said so were wrong. The question is not whether AI was involved. It is whether a human was present as an agent with something to express, making decisions that mattered. That can be true with AI. It can also be absent without it. The hydraulic press that crushed the grand piano contained no AI at all.
Both sides of this argument forget the same thing: culture is always mediated by technology. There has never been a golden age of unmediated human creativity — no pure state of artistic expression that preceded tools. Marshall McLuhan's insight was that the medium is the message: every technology doesn't just transmit expression, it reshapes what can be expressed and thought at all. Writing didn't just record speech — it changed how humans reasoned. The printing press didn't just copy manuscripts — it restructured knowledge and power. The internet didn't just transmit information — it reorganized attention, memory, and identity. DJ Kool Herc's two turntables were not a neutral conduit for pre-existing music; they were a new instrument that produced a new culture. The tools are never neutral. They are never outside the culture. They are part of how the culture thinks.
This means the question is never technology vs. pure human expression. It is always: which technology, designed with what values, serving whose vision? The pro-efficiency builders have answered that question — efficiency, automation, headcount reduction — and called it inevitability. The anti-AI critics have refused to engage with it, by pretending the question doesn't arise if you just reject the tool. But rejecting the tool is not preserving the culture. The culture was always already being shaped by tools. The only real choice is whether you shape the tools back.
The question worth asking of each new tool is still Lasseter's: is this a new expensive pencil in the hands of an artist, or is this the hydraulic press?
There are still companies building cameras that answer this question the right way. Fujifilm is the obvious example. The X-Pro3 has a hidden LCD — it flips inward, invisible by default, so the natural state of the camera is: no screen. You shoot. You don't chimp. You commit to the frame. This is a design opinion so aggressive that reviewers complained about it. Fujifilm shipped it anyway. The GFX 100RF goes further: 102 megapixels of medium format sensor in a rangefinder body with a single fixed lens. No zoom. No interchangeable glass. There's a dedicated toggle that steps through a small set of crop-based focal lengths — discrete, deliberate choices, not a continuous zoom you drift through. And then there are the film simulations — Velvia, Classic Chrome, Acros, Eterna — color science decisions you make before you shoot, the way you used to choose which roll of film to load. Not "capture everything flat and decide in post." Not "let the AI pick a look." You choose, now, before you press the shutter. The camera is opinionated and it wants you to be opinionated too.
This is the exact opposite of the direction Sony, Apple, Google, and Samsung are all moving. Their cameras maximize optionality: shoot RAW, bracket everything, let computational stacking extend the dynamic range, run the subject through a neural network, decide the look later. The implicit philosophy is that creative decisions are a form of risk to be deferred. Fujifilm's implicit philosophy is that creative decisions are the whole point, and deferring them is just procrastination. One approach treats the human as the liability. The other treats the human as the tool's reason for existing.
The early internet humanists — Barlow, Brand, Lanier — were utopians but they were also critics. They held the technology accountable to a standard: does this make people more free? More expressive? More truthful to themselves? Does it expand the range of what a human can be?
That critical tradition is what Silicon Valley abandoned. Not because the people there are bad, but because the culture stopped being accountable to it. The question "is this good for human flourishing?" became "does this have product-market fit?" Technology was supposed to be in service of the human process — the messy, slow, necessary work of people making things and becoming, through that making, more themselves. Now the human process is the inefficiency. The friction. The part being optimized away.
The Crush ad is the monument. The bicycle-of-the-mind company made an ad where the hydraulic press is the hero and the grand piano is the problem being solved.
When nobody in the room said "wait" — that is when you know the process has stopped mattering.