👋 Nice to meet you!

Christian
Frank

20+ years shaping digital products at global scale — bridging strategy, design, and technology to create experiences users love.

About

Human Centered
UI, UX Design

Christian Frank
Education
High School Diploma Lion-Feuchtwanger Gymnasium, Munich
1983 – 1992
Chemical-Technical Assistant Helmholtz Research Center, Munich
1992 – 1994
Diploma in Interaction Design HfG Schwäbisch Gmünd
1995 – 1999
Career
Consultant Web Projects Cable & Wireless, Munich
1999 – 2001
Online Art Director mps mediaworks, Munich
2001 – 2002
Project Manager Multimedia Munich University of Applied Sciences
2003 – 2004
Web Designer Expedia.de, Munich
2004 – 2007
Senior Interaction Designer Holtzbrinck Digital, Munich
2008 – 2009
Project Manager Web Analytics BMW GROUP, Munich
2010
Usability Engineer BSH Home Appliances, Munich
2010 – 2014
Senior UX Manager BSH Home Appliances, Munich
2014 – 2021
Lecturer, Design Methods HfG Schwäbisch Gmünd
2019 – 2023
Global Product Owner UI BSH Home Appliances, Munich
2021 – now
Skills

What I bring
to the table

Product & UX Leadership

Extensive experience leading global product and UX initiatives, driving vision and strategy for digital interfaces. As Global Product Owner UI at BSH Home Appliances, successfully coordinated the worldwide introduction of a UI Architecture approach — enhancing consistency and user-friendliness across multiple brands while significantly reducing cost and complexity.

Innovation & Business Strategy

A proven track record in guiding innovation from initial idea to validated business concept. Proficient in coaching multidisciplinary, international teams through Design Thinking and Lean Startup processes — including developing and validating new business models such as digital marketplaces and forward-thinking product concepts.

Product Ownership

Several years of experience as a Product Owner in agile digital projects, with certifications as a Scrum Master and Kanban System Designer. Adept at creating and managing global product roadmaps and architectures while leading project teams and external service providers — consistently delivering complex projects on schedule and within budget.

User-Centered Research & Analysis

Deep expertise in grounding product development in a solid understanding of user needs. Built on many years of qualitative market research, usability testing, and customer journey analysis. Experience includes analyzing user behavior to optimize conversion rates for e-commerce platforms like Expedia.de and BMW.

UI & Interaction Design

Outstanding conceptual skills in the design of interactive and multimodal systems — web, app, and embedded interfaces. Experience extends to gesture and voice control for home appliances. Expert in structuring complex content with appropriate information architecture and developing flexible, brand-neutral operating concepts as a foundation for differentiated user experiences.

Coaching, Mentoring & Education

Passionate about empowering others by sharing knowledge and fostering a user-centered mindset. Demonstrated through several years as a university lecturer for Design Methods and Design Management. Responsible for creating and delivering worldwide corporate training on UX, Design Thinking, and Lean Startup.

Stakeholder & Process Management

Excellent skills in creating and presenting compelling decision papers to align stakeholders on key initiatives. Successfully implemented a user-centered innovation and product development process within a large corporate environment — including representing UI design in an accessibility committee to ensure 'Design-for-all' principles.

Artificial Intelligence in Product Development

A strong foundation in AI product management, complemented by hands-on experience applying AI in various stages of the product lifecycle. This includes leveraging AI tools to generate insights from marketing research, as well as to accelerate UI design and prototyping workflows. Experience conceptualizing and integrating AI as a core product feature.

Certifications

Lifelong
Learning

2015

UX Advanced – Research & Testing

Bosch Academy
2016

UX Advanced – Managing UX Projects

Bosch Academy
2018

Design Thinking Coach

HPI School of Design Thinking
2019

Business Model Innovation

UC Berkeley Executive Education
2020

Professional Scrum Master (PSM I)

Scrum.org
2021

Kanban System Design (KMP I)

Kanban University
2025

AI Product Manager Nanodegree

Udacity
Articles

Thoughts &
Writing

Building UIs with AI

Building UIs with AI and building AI into UIs

To begin, it's crucial to distinguish between two fundamentally different roles AI plays in this transformation. The first is AI as a tool within the design process itself: systems that help designers work faster, generate variations, or translate concepts into prototypes. The second is AI as an integral component of the interface, manifesting in features like voice control, image recognition, or predictive behaviors that define how users interact with the final product.

When AI functions as a design tool, it primarily changes workflow and output. Solutions like Lovable, which can generate functional web applications from text prompts, or Figma plugins that automate diagram generation and asset organization, operate behind the scenes. In this case, AI acts as an assistant to the designer, accelerating and automating certain tasks while ideally leaving creative direction and strategic thinking to humans.

Tasks that once consumed hours, such as generating multiple variations of a screen layout or adapting designs for different languages and device sizes, can now happen in moments. Adobe's Firefly handles repetitive work like background removal and content generation from prompts. Uizard can transform hand-drawn sketches into digital designs. Framer creates websites with minimal coding. These capabilities accelerate certain phases of the design process, particularly early exploration and routine modification.

This acceleration prompts an obvious question: if machines handle these mechanical aspects of design, what remains for the designer? Their role shifts partly from executing specific interface solutions to defining the rules, principles, and parameters that guide AI-driven generation. It's a move from designing specific things to designing the systems that create those things.

Although current AI design tools demonstrate real capability, they also (still) reveal limitations. The designs they produce often feel generic, lacking the specific character that comes from deep engagement with a particular problem, context, or user group. They excel at creating plausible interfaces but struggle with the details of usability and brand identity. They can generate polished screens from text descriptions, but achieving the desired result requires significant iterative refinement through follow-up prompts.

At the same time, the unpleasant truth is that designers whose primary function is the creation of standard screen designs are the most vulnerable to displacement. This is because Artificial Intelligence excels at the execution of visual creation by drawing upon vast datasets of existing designs. When a non-designer can use an AI tool to produce a "good enough" visual for a specific need, the value proposition of a designer is potentially undermined. However, this trend did not start with AI tools; it was already a phenomenon observed during the digitization of design. Therefore, the rise of AI might not mean a replacement of the profession, but it forces another necessary (r)evolution by taking away standardized, lower-value tasks.

The recommendation for UI designers to acquire coding skills, conduct user research, and work with data is not a novel concept, but an acceleration of a long-standing best practice. For decades, the most effective designers have cultivated a "T-shaped" skill set, combining deep design expertise with a broad understanding of adjacent fields to improve collaboration and create more viable products. The objective for a designer is not to replace researchers, developers, or data analysts but to gain literacy in their domains.

Integrating AI as a direct component of the interface introduces a distinct set of design challenges and possibilities. Voice and conversational interfaces represent one visible manifestation of AI as an interface feature. The interfaces surrounding us, built into cars, appliances, medical devices, and industrial equipment, increasingly rely on AI features as core functionality. When AI enters these spaces, it brings the promise of adaptation and personalization.

In the long run, the process of designing static UIs could be replaced by self-optimizing systems that perpetually refine themselves based on user behavior. At the heart of this evolution lies Reinforcement Learning, which enables systems to learn and adapt through a kind of trial and error. By treating user interactions as a continuous stream of feedback, Reinforcement Learning algorithms could automate the optimization of a UI.

The most critical point in creating a self-optimizing UI is defining the optimization target (reward signal). The AI agent will relentlessly optimize for whatever it is told to maximize, so a poorly defined objective can lead to unintended user experiences. The key is to translate abstract, qualitative goals like user trust and delight into quantifiable metrics and then link them to concrete business objectives.
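To make the reward-signal idea concrete, here is a minimal sketch (in Python, with hypothetical metric names) of an epsilon-greedy bandit, one of the simplest reinforcement-learning baselines, choosing between UI variants based on a composite reward that mixes task success, friction, and a "delight" proxy. This is an illustration of the principle, not a production recommendation system.

```python
import random

# Hypothetical composite reward: qualitative goals such as trust or delight
# must first be proxied by measurable signals before an agent can optimize them.
def reward(task_completed, seconds_on_task, rated_helpful):
    score = 1.0 if task_completed else 0.0        # core business objective
    score -= min(seconds_on_task, 60.0) / 60.0    # friction penalty, capped
    if rated_helpful:
        score += 0.5                              # proxy for "delight"
    return score

class UIVariantBandit:
    """Epsilon-greedy choice between UI variants: a minimal RL-style baseline."""
    def __init__(self, variants, epsilon=0.1):
        self.epsilon = epsilon
        self.counts = {v: 0 for v in variants}
        self.values = {v: 0.0 for v in variants}   # running mean reward

    def choose(self):
        if random.random() < self.epsilon:
            return random.choice(list(self.counts))    # explore
        return max(self.values, key=self.values.get)   # exploit best so far

    def update(self, variant, r):
        self.counts[variant] += 1
        n = self.counts[variant]
        self.values[variant] += (r - self.values[variant]) / n
```

Each time a variant is shown, the observed reward feeds `update`, and the running means gradually favor the better-performing layout; a badly chosen `reward` function would be optimized just as relentlessly.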

Looking forward, the boundaries between design and development tools will keep blurring as AI assistants generate increasingly functional prototypes and working code. For embedded systems specifically, the toolchain will hopefully become more integrated, with design tools that understand embedded constraints and can generate code appropriate for resource-limited environments.

Accessibility at Home

Accessibility at Home: The promise and peril of digital User Interfaces

The home appliance industry is experiencing an underlying friction: digital technologies are simultaneously expanding accessibility for many users while creating new barriers for others. This creates a major challenge for manufacturers trying to create products that the widest possible range of people can use.

We are living longer, and the world is aging with us. By 2030, 1.4 billion people globally will be over sixty years old, representing the primary appliance-purchasing demographic. This demographic shift forces a critical look at the environments we depend on most: our homes. The greatest challenges aren't always structural. Often, they are found in the small, daily interactions with the objects we rely on: the microwave, the thermostat, the washing machine. The future of independent living will be decided not just by the architecture of our houses, but by the thoughtfulness of the interfaces we use every day.

Nearly a quarter of people aged 16 or over in the EU had a disability in 2024. Manufacturers who design products assuming perfect vision, strong fine motor control, and high digital literacy systematically exclude substantial portions of their addressable market. At the same time, it is still a common misconception that accessible design is at odds with aesthetic appeal. The belief is that making something usable for everyone means compromising on a clean, modern look, resulting in big, clunky buttons and stigmatizing "special" features. The tension is real, but the supposed conflict rests on a misunderstanding.

A fascinating phenomenon in inclusive design is the "curb cut effect", named for sidewalk ramps originally created for wheelchair users but now benefiting parents with strollers, travelers with rolling luggage, cyclists, delivery workers, and others. Similarly, accessibility features designed for specific disabilities often improve usability for everyone. Large, well-separated touch targets help not only users with motor control challenges but anyone operating devices quickly or in stressful situations. Clear, uncluttered interfaces assist not only people with cognitive limitations but reduce errors for all users. Voice control benefits not only those with vision impairments but anyone cooking with messy hands.

A principle of inclusive UI design is offering multiple pathways for interaction rather than forcing all users through a single mode. This multimodal approach recognizes that what enables one person may disable another. Physical buttons and dials provide tactile feedback and enable operation without visual confirmation. Touchscreens with customizable design accommodate visually oriented users. Voice control eliminates requirements for fine motor skills and physical reach. App-based controls enable remote operation and extended functionality for users comfortable with smartphones.

The critical insight is that these modes should coexist as options rather than one replacing others. A person with a speech impairment cannot use voice-only controls. Someone with vision loss needs tactile or auditory feedback.

Digital functionalities create both opportunities and challenges for inclusive design. Where analog appliances typically offered a handful of clearly defined functions, modern digital interfaces commonly feature a multitude of nested, often cluttered features. This "feature creep" happens because a product is often marketed on its versatility. Good design addresses this by defaulting to simplicity while making advanced features available to those who want them.

Information provided through multiple sensory channels simultaneously ensures it reaches users regardless of sensory limitations. A cycle completion alert can be communicated through an on-screen notification, audible chime, haptic vibration or smartphone message. This redundancy serves deaf users who rely on visual or tactile feedback, blind users who depend on auditory or haptic signals, and anyone in environments where one communication channel might be inaccessible.
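The redundancy described above can be sketched as a tiny dispatcher that sends one event through every available channel and tolerates individual channel failures. The channel callables and the `Alert` type are hypothetical; real appliances would drive a display, speaker, vibration motor, or push-notification service.

```python
from dataclasses import dataclass

@dataclass
class Alert:
    message: str

def dispatch(alert, channels):
    """Send the same event through every available channel; a failure in
    one channel must not silence the others."""
    delivered = 0
    for send in channels:
        try:
            send(alert)
            delivered += 1
        except Exception:
            continue  # e.g. the companion app is offline; screen and chime still fire
    return delivered
```

Because every channel is attempted independently, a deaf user still sees the on-screen notification even if the chime hardware fails, and vice versa.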

Customizability extends beyond interaction modes. Font sizes and contrast levels can be individually configured for users with reduced eyesight. Touch sensitivity can adapt to accommodate tremor or reduced fine motor control. Interface complexity can scale from simplified symbol sets to detailed text labels based on cognitive preferences. These adaptations transform a single product into something that serves different user needs without requiring separate "special versions" that can stigmatize and marginalize.
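As a rough sketch of such adaptations, assuming invented setting names rather than any real appliance API, a per-user preference object could drive font scaling, contrast, touch sensitivity, and label style:

```python
import math
from dataclasses import dataclass

@dataclass
class DisplayPrefs:
    """Per-user presentation settings (illustrative names, not a real API)."""
    font_scale: float = 1.0      # 1.0 = default text size
    high_contrast: bool = False
    touch_hold_ms: int = 50      # a longer hold filters out tremor-induced taps
    label_style: str = "icons"   # "icons", "icons+text", or "text"

def effective_font_px(prefs, base_px=16):
    # Round up so scaled text never lands below the requested size.
    return math.ceil(base_px * prefs.font_scale)
```

The point of the sketch is that one product ships with one interface, and the same rendering code simply reads different preference values per user, rather than requiring a separate "special version".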

AI applications are beginning to expand inclusive design possibilities in new ways. Automatic fabric detection in washing machines compensates for uncertainty about program selection. Food recognition systems in ovens suggest optimal cooking parameters. Inventory tracking in refrigerators assists people experiencing memory challenges. When implemented thoughtfully, these capabilities can compensate for various cognitive challenges while remaining useful conveniences for all users.

Achieving genuinely inclusive design also requires inclusive development processes. Diverse user groups must be involved throughout product development rather than treating accessibility as a final validation step. Testing exclusively with young, technically proficient, non-disabled users produces designs that work well for that demographic while creating barriers for others. Including participants with vision impairments, motor limitations, cognitive differences, and varying age groups in iterative testing cycles throughout development reveals usability issues that design teams might otherwise overlook.

Distributed UIs

Distributed UIs: Designing User-Friendly Interfaces for Smart Kitchens

Distributed User Interfaces (DUIs) describe a design concept where a user interface is not confined to a single device but is spread across multiple devices and interaction modalities. This approach reflects the natural, multi-device behavior of people in technology-rich environments. Instead of centralizing all interaction on one screen, DUIs place information and controls where they are most effective for a given situation. This is particularly relevant in the home appliance sector, as everyday activities like cooking or cleaning naturally involve multiple tasks and devices.

In many consumer contexts, frequently switching between smartphones, wearables, and embedded displays increases cognitive load and can disrupt the flow of a task. DUI concepts aim to address this issue by creating a coherent interaction space that spans all devices. A well-executed DUI design allows users to access relevant information contextually at the right place and time. However, the benefits of DUIs depend on good, intuitive design. If users do not understand how devices are connected or where to find certain controls, these distributed systems can quickly become confusing.

The kitchen is an ideal application area for DUIs in the home appliance domain. Cooking is a complex activity involving time pressure, physical work, and constant context shifts. People often prepare meals while monitoring several appliances, following a recipe, and coordinating the timing of different dishes. Traditional interfaces, like small appliance screens or printed recipes, often fail to support this multitasking environment effectively. Distributed UIs can improve this experience by assigning tasks to the most suitable modality or display.

Hands-free interaction, especially through voice, offers a significant benefit when hands are occupied or messy, which is a common scenario during cooking. Furthermore, wearables or ambient displays let users check appliance status without walking across the room. DUIs can also support collaborative cooking, where multiple users need to access the same information. These benefits can only be realized when the system behaves predictably and reduces effort rather than adding complexity.

Despite their potential, DUIs come with significant challenges. One of the biggest is complexity, from both a technical and a user experience (UX) perspective. Keeping multiple devices synchronized and consistent requires a robust system architecture. From the user's point of view, the interaction logic must be clear, allowing them to form a solid mental model of how the system works. If an action on one device causes an unexpected result on another, user trust is quickly lost.
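The synchronization requirement can be illustrated with a minimal single-source-of-truth sketch: every UI surface mirrors one shared appliance state, so an action on one device is immediately reflected on the others. All names here are invented for illustration; a real system would sit on top of a smart-home protocol.

```python
class SharedApplianceState:
    """Single source of truth for one appliance, mirrored to every UI surface.
    (A sketch of the synchronization problem, not a real smart-home stack.)"""
    def __init__(self):
        self._state = {}
        self._surfaces = []   # e.g. appliance display, phone app, smart watch

    def attach(self, surface):
        self._surfaces.append(surface)
        surface.render(dict(self._state))       # a new surface starts consistent

    def update(self, source, key, value):
        self._state[key] = value
        for surface in self._surfaces:
            if surface is not source:
                surface.render(dict(self._state))  # keep every other view in step

class Surface:
    def __init__(self, name):
        self.name, self.view = name, {}
    def render(self, state):
        self.view = state
```

Routing every change through one state object is what keeps the user's mental model intact: setting a timer on the phone and seeing the same timer on the oven display is a single update, not two interfaces that happen to agree.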

Interoperability is another major hurdle, as most households contain devices from various manufacturers that do not use common standards, which limits the practical rollout of fully integrated systems.

Multimodal user interfaces play a key role in making DUIs effective. By combining input and output channels, such as voice control (increasingly powered by large language models), touch, gesture, and audio-visual feedback, they make the system more robust, as the strengths of one modality can compensate for the weaknesses of another. In a cooking context, voice commands can be paired with visual feedback for confirmation, while touch input allows for more precise control when needed.
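One way to express that complementarity is a small, admittedly simplified input-resolution rule: prefer unambiguous touch, accept confident voice, and fall back to an on-screen confirmation when the signal is weak. Thresholds and names are invented for illustration.

```python
from typing import Optional

def resolve_command(voice_text: Optional[str],
                    touch_target: Optional[str],
                    voice_confidence: float) -> str:
    """Prefer precise touch, accept confident voice, otherwise ask to confirm."""
    if touch_target is not None:
        return touch_target                 # touch input is unambiguous
    if voice_text is not None and voice_confidence >= 0.8:
        return voice_text                   # confident voice command
    return "confirm_on_display"             # weak signal: request visual confirmation
```

The fallback path is the multimodal pairing from the paragraph above: a low-confidence voice command is not silently executed or silently dropped, it is surfaced on a display where touch can resolve it.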

Artificial intelligence takes the potential of distributed interaction a step further by enabling systems to understand context, predict user intentions, and adapt interfaces to individual needs and preferences. For example, an intelligent kitchen could identify the dish being prepared and automatically coordinate oven settings, timers, and reminders. Users appreciate this kind of assistance as long as it really supports them in getting their job done. Unwanted or unexpected AI behavior can quickly cause users to lose trust and abandon the system.

Technical complexity, usability issues, and poor or inconsistent design execution remain critical obstacles. Ultimately, the success of future domestic interaction systems will be defined not only by their technological novelty, but by how carefully and thoughtfully they are designed around the user.

Beyond buttons and screens

Beyond buttons, knobs and touchscreens: The challenge of designing User Interfaces for our future homes

As UI/UX designers, we are paid to obsess over friction and cognitive load. We map out user journeys to smooth over the disjointed experience of setting a dishwasher, and we lament the cryptic symbols on an oven dial. For years, our work has been defined by the boundaries of buttons and glass screens.

But the next great frontier in user experience won't be on a screen. The integration of AI and robotics into the home is set to dissolve the traditional interface, forcing us to design for conversations, gestures, and intent.

This evolution promises to shift the interface from the screen to the space itself. We're moving toward a world where a user's natural language is the primary input. The design challenge isn't necessarily about creating a better menu tree for a smart oven; it's about designing a conversation that can understand the nuance between "bake the chicken until it's cooked" and "roast the chicken until the skin is crispy." Similarly, kinetic interactions — like a simple hand gesture to adjust a stove's flame — aim to create an experience that mirrors our natural behaviors, reducing the mental effort of translating what we want into a button press.

However, we know that the most advanced solution is rarely the most human-centric one. This is the critical challenge: we are all conservative users to a certain degree, especially when it comes to our homes. There is a deep, ingrained trust in the familiar. The humble knob on a stove offers satisfying tactile certainty and efficient operation. It never misunderstands you. It never needs a software update. It just works.

This is the wall of habit and reliability that our futuristic designs will run up against. A user might tolerate a buggy social media app, but they will have zero patience for a robotic arm that misinterprets a gesture and ruins dinner. The "wow" factor of a gesture-controlled kitchen will fade instantly if it's less reliable than the 15-year-old appliance it replaced.

Our role as UI/UX designers is shifting from architecting on-screen journeys to choreographing the interaction between human and machine (HMI). The new frontier is "shared autonomy", and our primary job is to design the hand-off. How do we create an interface that allows a user to delegate a task to a robot but feel confident they can seamlessly intervene at any moment? This isn't just about an "emergency stop" button; it's about designing a graceful "off-ramp" in the user experience, ensuring the user always feels like the chef, not just a spectator.

The future of home UI/UX isn't about eliminating interfaces, but about making them invisible, predictive, and deeply respectful of the user's established habits. It's about building trust through flawless execution and providing transparent control that honors the comfort of the familiar.

So, as we stand on this precipice, the critical question for us isn't just "what can the technology do?" but "how do we design it so people will actually want to buy and use it?"

Contact

Let's make
something great