We turn your ideas into math and software

Good Outcomes with “Bad Design”: Rethinking UI in the Age of AI

When crafting user interfaces (UI) for machine learning (ML) systems that tackle complex, multilayered tasks, the outcomes can range from delightfully unexpected to downright perplexing. This became apparent while designing the UI for our ML system, which optimises how furniture is packed onto pallets.

In a factory producing a diverse array of furniture, an automated palletising system can shave off a huge chunk of rather complex manual work—if designed carefully to serve its purpose. Developing such a system was an unexpectedly challenging endeavour, sparking numerous philosophical discussions and requiring both courage and sisu to break away from some traditional UI design conventions.

The Human Problem

The ML system is equipped with historical data on previous packing efforts, which it uses to form its palletising suggestions. This treasure trove of data is not flawless and doesn’t encompass every possible scenario. Therefore, even though this time-consuming process is mostly automated, some human interaction is necessary.

A significant design challenge was preventing scenarios where users end up manually redoing the task if the system has made a mistake. It’s a common human impulse to micromanage and roll up one’s sleeves to show the newbie (the ML system) how things should be done, but reverting to the old manual ways would defeat the purpose of the entire system. A rather quirky design conundrum.

In the world of UI/UX design, ease of use is paramount; the user needs to be in charge of the process, and following established conventions typically helps users intuitively navigate new systems. However, perhaps clinging to the familiar isn’t always the best approach, since true progress tends to require a departure from the status quo.

Beep Boop, Stand Aside, Human!

We needed the users to take a step back, let go of old habits, and simply guide—rather than dictate—the ML system. Thus, we concocted a system with an unconventional mental model that might initially seem tricky to master. Since this tool is designed for professionals, and mastering it truly pays dividends, we knew this approach was worth pursuing. The overarching aim was to streamline the process, and sometimes, curbing users’ natural behaviour is key to success.

Here’s our approach:

  1. The system makes a palletising suggestion for the user to approve or modify.
  2. The user identifies potential improvements, but here’s the twist: Instead of reverting to repacking the problematic pallets in their old, tried-and-tested manual way, they can now simply suggest more effective packing methods. This is done by crafting example pallet models that suit the occasion, like so:

Crafting a pallet model using a computer-generated pallet suggestion as an example

The machine learning model incorporates the new models immediately, without a slow and brittle retraining loop. There is, however, no guarantee that the new models will appear in the output exactly as designed. This inability to directly dictate the outcome might frustrate some users, but it serves an important purpose. Trust the process, human!
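The article doesn't describe the implementation, but the core idea, user examples joining the system's reference data with immediate effect and no retraining step, can be sketched roughly. Everything below (the `PalletModel` and `PalletSuggester` names, the overlap-scoring heuristic) is a hypothetical illustration, not the actual system:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class PalletModel:
    """A hypothetical example pallet: the items packed on it and its height."""
    name: str
    items: tuple   # item identifiers packed on this pallet
    height_mm: int

@dataclass
class PalletSuggester:
    """Scores candidate pallet models against a shared reference pool.

    User-supplied examples join the pool immediately; no retraining step.
    """
    reference_pool: list = field(default_factory=list)

    def add_user_example(self, model: PalletModel) -> None:
        # The example becomes one more data point the optimiser may draw on;
        # it is not guaranteed to be reproduced verbatim in the output.
        self.reference_pool.append(model)

    def suggest(self, order_items: set):
        # Toy heuristic: prefer the reference model covering the most order
        # items, breaking ties in favour of a lower stack height.
        def score(m: PalletModel):
            return (len(order_items & set(m.items)), -m.height_mm)
        return max(self.reference_pool, key=score, default=None)

# A user example takes effect on the very next suggestion.
suggester = PalletSuggester([PalletModel("historical", ("chair", "table"), 1800)])
suggester.add_user_example(PalletModel("user-fix", ("chair", "stool", "lamp"), 1500))
print(suggester.suggest({"chair", "stool"}).name)  # the user example wins here
```

The point of the sketch is the shape of the interaction, not the scoring: guidance flows in as data the optimiser weighs alongside history, rather than as commands it must obey, which is why the exact example may or may not surface in the result.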

Here’s What It Looks Like in Practice

Imagine having 10 pallets and identifying a packing issue with three that share the same design. You propose an improved design for the machine to learn from, then run the program again. The result? All pallets are packed differently! But why, you might wonder. Here lies the wisdom:

The machine might replicate your suggestion to the letter, or it might not. Either way, it performs optimally, leveraging all available data, including your input. More often than not, the machine doesn’t just tweak the problematic pallets; it rethinks the entire packing strategy, applying the newfound wisdom across the board. Most people would find such a comprehensive overhaul too cumbersome, and rarely worth the effort, to undertake themselves, whereas computers, which love crunching numbers, do it effortlessly and without hesitation.

Can “bad design” lead to good outcomes?

Our experience shows that challenging conventional UI wisdom not only enhances system intelligence but also prompts users to think outside the box, demonstrating that unconventional methods can indeed yield superior results. Artificial intelligence and machine learning are reshaping UI/UX design norms. Our interactions with computers are evolving; we are becoming supervisors of systems rather than dictators of their actions. We have to learn new ways to interact with our future robot overlords, whether it’s through natural language, gestures, or thoughts (and prayers?). What are the best practices in this new landscape, and who are the trailblazers to follow?
