Building UIs with AI and building AI into UIs
To begin, it's crucial to distinguish between two fundamentally different roles AI plays in this transformation. The first is AI as a tool within the design process itself: systems that help designers work faster, generate variations, or translate concepts into prototypes. The second is AI as an integral component of the interface, manifesting in features like voice control, image recognition, or predictive behaviors that define how users interact with the final product.
When AI functions as a design tool, it primarily changes workflow and output. Solutions like Lovable, which can generate functional web applications from text prompts, or Figma plugins that automate diagram generation and asset organization, operate behind the scenes. In this case, AI acts as an assistant to the designer, accelerating and automating certain tasks while ideally leaving creative direction and strategic thinking to humans.
Tasks that once consumed hours, such as generating multiple variations of a screen layout or adapting designs for different languages and device sizes, can now happen in moments. Adobe's Firefly handles repetitive work like background removal and content generation from prompts. Uizard can transform hand-drawn sketches into digital designs. Framer creates websites with minimal coding. These capabilities accelerate certain phases of the design process, particularly early exploration and routine modification.
This acceleration prompts an obvious question: if machines handle these mechanical aspects of design, what remains for the designer? Their role shifts partly from executing specific interface solutions to defining the rules, principles, and parameters that guide AI-driven generation. It's a move from designing specific things to designing the systems that create those things.
Although current AI design tools demonstrate real capability, they also (still) reveal limitations. The designs they produce often feel generic, lacking the specific character that comes from deep engagement with a particular problem, context, or user group. They excel at creating plausible interfaces but struggle with the details of usability and brand identity. They can generate polished screens from text descriptions, but achieving the desired result requires significant iterative refinement through follow-up prompts.
At the same time, the unpleasant truth is that designers whose primary function is the creation of standard screen designs are the most vulnerable to displacement. This is because AI excels at executing visual creation by drawing on vast datasets of existing designs. When a non-designer can use an AI tool to produce a "good enough" visual for a specific need, the value proposition of a designer is potentially undermined. However, this trend did not start with AI tools; it was already observable during the digitization of design. The rise of AI therefore may not mean the replacement of the profession, but it forces another necessary (r)evolution by taking away standardized, lower-value tasks.
The recommendation for UI designers to acquire coding skills, conduct user research, and work with data is not a novel concept, but an acceleration of a long-standing best practice. For decades, the most effective designers have cultivated a "T-shaped" skill set, combining deep design expertise with a broad understanding of adjacent fields to improve collaboration and create more viable products. The objective for a designer is not to replace researchers, developers, or data analysts, but to gain literacy in their domains.
Integrating AI as a direct component of the interface introduces a distinct set of design challenges and possibilities. Voice and conversational interfaces represent one visible manifestation of AI as an interface feature. The interfaces surrounding us, built into cars, appliances, medical devices, and industrial equipment, increasingly rely on AI features as core functionality. When AI enters these spaces, it brings the promise of adaptation and personalization.
In the long run, the process of designing static UIs could be replaced by self-optimizing systems that continually refine themselves based on user behavior. At the heart of this evolution lies reinforcement learning, which enables systems to learn and adapt through a kind of trial and error. By treating user interactions as a continuous stream of feedback, reinforcement learning algorithms could automate the optimization of a UI.
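To make the trial-and-error idea concrete, here is a minimal sketch of one of the simplest reinforcement-learning setups, an epsilon-greedy multi-armed bandit that learns which of several UI variants performs best. Everything in it is illustrative: the variant names, the exploration rate, and the idea of a per-session scalar reward are assumptions, not a prescription from any particular tool.

```python
import random

class UIVariantBandit:
    """Epsilon-greedy bandit over UI variants (illustrative sketch).

    Each user session is one 'trial': the system serves a variant,
    observes a scalar reward (e.g. an engagement signal), and updates
    its running estimate of that variant's value.
    """

    def __init__(self, variants, epsilon=0.1):
        self.variants = list(variants)
        self.epsilon = epsilon                      # exploration rate
        self.counts = {v: 0 for v in self.variants}
        self.values = {v: 0.0 for v in self.variants}  # running mean reward

    def choose(self):
        # Explore with probability epsilon; otherwise exploit the
        # variant with the best estimated value so far.
        if random.random() < self.epsilon:
            return random.choice(self.variants)
        return max(self.variants, key=lambda v: self.values[v])

    def update(self, variant, reward):
        # Incremental mean update: each interaction is one feedback sample.
        self.counts[variant] += 1
        n = self.counts[variant]
        self.values[variant] += (reward - self.values[variant]) / n
```

In a real deployment the reward would come from logged user behavior rather than a simulation, and a full reinforcement-learning system would also model state (who the user is, where they are in a flow), which a bandit deliberately ignores.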
The most critical point in creating a self-optimizing UI is defining the optimization target (reward signal). The AI agent will relentlessly optimize for whatever it is told to maximize, so a poorly defined objective can lead to unintended user experiences. The key is to translate abstract, qualitative goals like user trust and delight into quantifiable metrics and then link them to concrete business objectives.
Looking forward, the boundaries between design and development tools will keep blurring as AI assistants generate increasingly functional prototypes and working code. For embedded systems specifically, the toolchain will hopefully become more integrated, with design tools that understand embedded constraints and can generate code appropriate for resource-limited environments.