The future of UI design for embedded systems

It’s time for embedded user interfaces to catch up to the rest of the world.

We know this already, as you can’t read an article or go to a conference without someone saying “our users want their smartphone experience in their car and on their fridge!” The allure of blending powerful features with beautiful user experiences (UX) is easy to see, and by comparison, embedded systems are found lacking.

So why are we falling behind and what market forces are driving us to catch up?

With less power comes more responsibility

There’s a common belief in a sliding scale of hardware to UI capability: the smaller the platform, the less you can do with the UI. While true in a strictly functional sense—if you don’t have 3D hardware acceleration, you can’t use 3D hardware acceleration—it’s often the application architecture, graphics library limitations, and suboptimal performance that result in UIs that look like this:

Example of old UI design

Instead of this:

Example of good UI design

We know this has to do with not having enough development time and resources to squeeze as much out of the hardware as possible, but users are left scratching their heads as to why it’s easier to operate a smartphone that does fifty things than a dishwasher that does only one.

It certainly doesn’t have to be this way. Not only are CPU and GPU hardware capabilities increasing, right down to the MCU, but UI toolkits are getting better too. Check out the recently announced ARM DynamIQ processors and Mali-G72 GPU, or the first MCU with an integrated 2D GPU and DDR2 memory from Microchip Technology Inc. We’re being offered more speed and capability than ever before, with less power consumption and heat.

And it will only get better. Embedded processors have traditionally been slow to adopt cutting-edge components, being more concerned with power, heat, and longer in-service times. But with the Internet of Things, smart homes, connected cars, and a host of other trends that have become reality, the demand for faster, more capable devices by both consumers and industry is just getting started.

So, what can embedded UI teams look forward to in 3 to 5 years? A look at the forces transforming the technology world right now provides the answers.

UI design is equal to code

The 2017 Design in Tech Report, presented at SXSW Interactive, offers an analysis of design trends changing the face of tech. Three points stand out as particularly relevant for embedded systems, given that users are demanding more from their UIs and that more customers are on the way. The IoT market alone is expected to grow to 75.4 billion devices by 2025.

Perhaps the biggest change is that designers, or at least the unique skill set of UI design, are becoming a priority for development teams. Developers know how to create, test, and optimize apps, but it’s the lack of a designer’s knowledge that results in so many of the “traditional-looking” UIs still being churned out by embedded teams.

Only by making the job of the designer equal to that of the developer can products be made beautiful, useful, engaging, and functional—qualities that drive adoption and brand loyalty. The report points out that not only have Facebook, Amazon, and Google grown their art and design personnel by 65 percent in the past year, but two out of every five designers are now involved in code development.

We’re seeing this with embedded software teams as well, as bringing a designer’s mindset into the developer’s machine is the surest way to achieve UX and commercial success.

UI design must be inclusive

“Adopting an inclusive design approach expands a tech product’s total addressable market.”
2017 Design in Tech Report

The report observes that, historically, technology products weren’t designed for a broad range of users; rather, they were built to be used by the people who created them. This exclusivity has softened over time, of course, with a whole industry devoted to user research and feedback, but again, smartphones continue to move the UX goalposts for embedded. Why can’t every device operate with just one button and a simplified set of graphics, and be understood by everyone around the world?

Users don’t care that your graphics library doesn’t support pixel shaders or hardware acceleration. Embedded systems are highly technical but cannot survive by serving only a technical audience, especially as they sell into newer consumer-dominated markets all over the world. UIs must be thought out, designed, and built for not only maximum usability but inclusivity as well—whether it’s geographic, cultural, or socioeconomic—to satisfy global user expectations.

Although originally intended for the web, the World Wide Web Consortium’s guidelines for accessibility, usability, and inclusion apply equally to embedded systems because, at the end of the day, all our users are human.

“Inclusive design, universal design, and design for all involves designing products, such as websites, to be usable by everyone to the greatest extent possible, without the need for adaptation. Inclusion addresses a broad range of issues including access to and quality of hardware, software, and Internet connectivity; computer literacy and skills; economic situation; education; geographic location; and language — as well as age and disability.”

One major example of a company tackling this head-on is Microsoft; check out their inclusive design toolkit.

The key to inclusivity is designing UI elements that are simple and unambiguous while eliciting an emotional connection. In this way, your UI is less frustrating to use and more likely to be used.

To design embedded UIs that include, rather than alienate, as broad an audience as possible, it’s the designer to the rescue. Unlike developers, designers have the specific techniques and experience to distill diverse personas into a usable design, so your product ends up with a better UX/UI that also happens to be easy for everyone to use.

The need to support this diversity has found its way into the makeup of design teams, as the Design in Tech Report shows.

UI design unlocks value

“Design tool companies and design community platforms occupy new positions of value for tech.”
2017 Design in Tech Report

Adobe Photoshop, GIMP, and Sketch are just a few of the many design tools that have democratized graphic design, lowering the barrier to UI excellence and reducing time to release (does anyone do back-of-the-napkin sketches anymore?).

The same applies to embedded UI tools. Our developers have mostly moved away from raw graphics primitives to mid-level libraries such as OpenGL and Direct3D, with higher-level frameworks such as Qt gaining popularity. With these pre-packaged, easier-to-use toolkits, building UIs has also become democratized. Anyone can take a Qt widget and run with it.

The problem is that to unlock the maximum value from all that design effort, you need to go one step further and avoid getting locked into how pre-packaged libraries look and behave.

“Every dollar invested in UX brings 100 dollars in return. That’s an ROI of a whopping 9900 percent.”
Good UX is Good Business: How to Reap Its Benefits, Forbes

Going straight from the designer’s tool to deployment, with little to no modification of the design, is the only way to ensure your product offers the best UX/UI. Otherwise, development tradeoffs (software limitations, pre-canned widgets, resource constraints, power, time, etc.) will chip away at your optimal experience.

Storyboard Suite is one such tool, with built-in parallel workflows between the designer and developer to reduce the common frustrations and friction between them. By working side by side, the designer can focus on creating the best UX/UI possible, the developer can focus on the back end, and the whole application is created, tested, and deployed within a single environment. The app also happens to run on a highly optimized engine tuned for a variety of platforms and architectures.

This video from last year’s Apple WWDC event sums up why smart design matters:

“When you put so much hard work and creativity into an app, you really want it to reach people. But ultimately, everyone will experience your app in their own way and if you think about the millions of people who could use your app, you have to consider how diverse their perspectives are.”

The key statement here is that we all want our products to reach people, in terms of both usability and market share. Using collaborative tools that make design equal to code is just the way to do it.

To see how easy it is to put design into action, try the sample applications included in our free 30-day Storyboard Suite trial.

The perks of developing in Storyboard

The following is a post from Vanessa, one of our multi-talented and insanely busy in-house Storyboard application developers. Vanessa uses Storyboard Engine, Storyboard IO, Adobe Photoshop, and many other tools to bring custom UI applications to life for many of our major customers.

User interface (UI) development can be tricky, even more so when you’re building apps for embedded devices like kitchen appliances and vending machines. As an application developer with a background in interactive media design, I’m constantly amazed at how much we can create from so little information. It seems that every project starts with a rough sketch with many holes to fill, and somehow we get amazing experiences delivered on time.

I’ve been using Storyboard Suite for over two years and it’s pretty incredible what it can do. Compared to other design tools I’ve used, I feel really spoiled with how easy Storyboard makes things.

Here are some of the things it does that make my life easier and will hopefully make your life easier too.

Develop without knowing the details

With Storyboard, I can develop the application without knowing all the details, big or small. I’ve worked on customer projects where we only had a rough outline of the UI to work with. With a basic understanding of what the application needed, I started developing the preliminary navigation, followed by the backend functionality. Because Storyboard keeps the UI separate from the backend, it was easy to abstract the development of these key pieces. Then, once the design decisions were made, I added the graphics and interactions to complete the application.

All things considered, it’s pretty hard to be stalled when using Storyboard, which helps get products out the door, and into the hands of users quicker — without sacrificing quality or performance.

Develop without knowing the platform

Storyboard UI development is totally platform independent, letting you develop the app you want without knowing what target you are developing for. You can test the communication with the backend using Storyboard’s iogen and iorcv tools, or use a compiled C program, and then transfer your UI files onto your target via USB once you decide on specific hardware.
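To make that separation concrete, here’s a minimal, self-contained C sketch of the idea: the backend publishes named data events, and the UI side only ever consumes them through a narrow interface. This is purely illustrative, not Crank’s Storyboard IO API; the types, functions, and the temp_update event name are invented for the example.

    #include <stdio.h>

    /* A named event with a payload, the only thing the UI layer ever sees. */
    typedef struct {
        char  name[32];   /* event name the UI binds to, e.g. "temp_update" */
        float value;      /* payload */
    } ui_event_t;

    /* In a real system this would travel over a socket, queue, or IO channel;
       a callback keeps the sketch short. */
    typedef void (*ui_event_sink_t)(const ui_event_t *ev);

    static void backend_publish(ui_event_sink_t sink, const char *name, float value)
    {
        ui_event_t ev;
        snprintf(ev.name, sizeof(ev.name), "%s", name);
        ev.value = value;
        sink(&ev);
    }

    /* Stand-in for the UI runtime: it reacts to named events, never backend code. */
    static void ui_consume(const ui_event_t *ev)
    {
        printf("UI received %s = %.1f\n", ev->name, ev->value);
    }

    int main(void)
    {
        backend_publish(ui_consume, "temp_update", 72.5f);
        return 0;
    }

Because the backend only knows about event names and payloads, you can swap the desktop test sink for the real target transport later without touching the UI design.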

Once again, there’s no reason to wait to start creating your UI. This reduces the delays and holdups that tend to happen with traditional UI development.

Easy to test

Maybe worse than not deciding on a platform is not having any hardware at all! But with Storyboard, it’s easy to test your app with the desktop ‘simulator’, which isn’t actually a simulator but the real Storyboard Engine runtime, built for the operating system and graphics renderer on your computer. So, what you see on the desktop is exactly what you’ll get on whichever runtime you go with.

Easy to change and iterate

We all love change, right? Well, with Storyboard, design and code changes are easy to make for a number of reasons. First, you can easily make changes to the design of the application without having to search through code to find where that piece of the app lives, which is especially important when you’re collaborating and someone else may have written it.

Second, having a visual, drag-and-drop tool that automatically updates everything underneath lets you easily picture and modify what the UI will look like without much stress. Instead of designing in code, running the application, navigating to the part you’ve changed, deciding, “No, I don’t like it there,” and repeating, it’s much easier to scroll in the Storyboard Designer editor view to the screen you want to edit, position the element where you want it, and see instantly whether it looks good or not.

Third, it allows designers to frequently test out the UI design and interaction to make sure their vision is coming to fruition, either on the simulator or on any number of real targets. This allows designers and developers to continuously evaluate together and create the best possible version of an application.

Easily run your application on a real target

No part of the UI application built in Storyboard Designer needs to be compiled to run on a target. This means I don’t need a build environment set up for compiling, which makes transferring and testing on a target pretty painless. All I need to do is export an application file, which creates an XML model file to be interpreted by Storyboard Engine, then transfer it to the target, run sbengine, and point it at that model file.

In Storyboard 5.0, we introduced the SCP transfer feature, which has made this process, and my life, so much easier! No more copying files to a USB stick, moving the stick to the target, making sure it mounts properly, moving files via the command line, and restarting the board when needed.

Now, all of this can be done over a network connection, with files copied directly down to the target from within Storyboard Designer, and you can also choose to run a script to start your app, or do anything you want really, right after the transfer is complete. Just another reason why developing in Storyboard Suite is so great.

To try Storyboard Suite for yourself (and feel like an application development goddess!), start your free trial now and see how easy it is to create beautiful embedded UIs from concept to production.


Supporting UI scalability – from MCU to MPU

We are often asked the question, “what platforms do you support?” or variations on the same that include operating systems, GPUs, rendering technologies, and more. Development teams want to know if Storyboard Suite supports the environments they have, or want to build out.

We always give the same answer: All of them.

Of course, the details are important, so this blog explains exactly what Storyboard Suite supports.

Support across the boards

Storyboard supports a wide range of ARM architectures, from the low-power Cortex-M series used on MCUs to the powerful Cortex-A series used on MPUs. We’ve also supported PowerPC, x86, and SH4 architectures, among others. Storyboard runs on everything from task-based RTOSes such as Micrium µC/OS-II and FreeRTOS to full-featured, general-purpose operating systems such as Linux, QNX Neutrino, and Green Hills INTEGRITY.

This means you can build user interfaces (UIs) for everything from tiny industrial controls to complex automotive instrument panels without having to worry about the particulars of the platform or the effort required to port to different environments when your product line grows.

Here are examples of our demo images running on the STM32F4 (Cortex-M4) and the NXP i.MX 6QuadPlus (Cortex-A9):

STM32F4 (Cortex-M4)

NXP i.MX 6QuadPlus (Cortex-A9)

We support this wide range through extensive development and testing on each platform, ensuring the Storyboard Engine takes advantage of specific board features, such as proprietary graphics libraries, 2D or 3D hardware acceleration, and memory management functions.

The ease with which we provide support for multiple platforms is a result of the way Storyboard is architected: it’s purpose-built for maximum performance and customizability. This flexibility is also exposed to the user, allowing fine control over application configuration and performance:

Application size – The Storyboard Engine is based on a modular plugin system, such that only the required components are included in the system configuration and deployed. Not only does this reduce the application’s footprint, it allows the user to scale functionality down for systems with minimal memory and CPU resources, including:

  • Screen transitions
  • Animations
  • 3D models
  • Input methods
  • Rendering types (polygons, circles, raw canvas, etc.)
  • Lua scripting

Scalability across platforms – Storyboard uses a fixed data model to represent the UI, not generated code, which means the same model can be used across multiple platform-specific runtimes that are tuned for the features and hardware optimizations of the platform. The model itself is event-driven and uses a well-defined API between the UI and the event/messaging system native to the deployment platform, meaning it isn’t tied to a specific environment.

Graphics rendering – Storyboard lets you choose how you want to render graphics, from writing directly into the frame buffer to taking advantage of any 2D or 3D hardware rendering available on the board.

Flexible filesystem – The standard Storyboard deployment uses a filesystem to store the UI data model and associated resources but, if you don’t have a filesystem, you can compile and bind resources directly to Storyboard Engine and flash to the target system.

Font usage – A Storyboard application can either use a font engine to load fonts directly from TrueType or OpenType files, or use pre-rendered glyphs that are stored in ROM.
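To make the pre-rendered glyph option a little more concrete, here is a purely illustrative C sketch of what a ROM-resident glyph table could look like. This is not Storyboard’s internal format; the glyph_t type and the font data are hypothetical.

    #include <stdint.h>

    /* Illustrative only: a hypothetical layout for pre-rendered glyphs kept in
       flash/ROM so no font engine or filesystem is needed at runtime. */
    typedef struct {
        uint32_t codepoint;       /* Unicode code point, e.g. 'A' */
        uint8_t  width, height;   /* bitmap dimensions in pixels */
        uint8_t  advance;         /* horizontal advance in pixels */
        const uint8_t *bitmap;    /* 8-bit alpha coverage, width * height bytes */
    } glyph_t;

    static const uint8_t glyph_A_16px[10 * 16] = { /* pre-rendered pixel data */ 0 };

    static const glyph_t font_sans_16px[] = {
        { 'A', 10, 16, 11, glyph_A_16px },
        /* one entry per glyph the UI actually uses */
    };

Because the glyph data lives in const arrays, the linker can place it in read-only memory and the runtime can draw text without touching TrueType or OpenType files at all.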

To try out Storyboard on different platforms for yourself, download and run our ready-to-go demo images.

 

Storyboard Suite vs. Qt, part 2: Which is better for embedded?


(read part 1 here)

Last time we compared Qt and Storyboard Suite across three criteria: designer workflow, resource usage and performance, and platform scalability. In this second and final post, we’ll cover:

  1. Application architecture
  2. Testing your user interface (UI)
  3. Time to experience and overall cost

It’s important to remember that Qt is an excellent framework that performs well across a wide variety of applications – there’s a reason why it’s so popular. However, we’re just focusing on embedded devices here and not making any claims that Storyboard is better across all types of apps.

Application architecture

By default, a Storyboard application enforces a clean separation between the user interface layer and the backend, with a well-defined data model and events API in between. The data model is generated from the UI design and is used by the Storyboard engine at runtime. This architecture leads to a robust, portable, and highly optimized application that also lets the designer and developer work in parallel to build the app. They can literally work on the same application at the same time because they’re working on two self-contained components.

This separation results in faster testing and iteration of both sides of the application and allows the designer to change the UI – say, after design reviews or user testing – with little to no impact on the code underneath. We’ll discuss the benefits of this below.

By design, Qt allows you to create your own application architecture in essentially one of two ways. Using standard Qt distributions (i.e., not Qt Lite), you get one monolithic application that has the UI and underlying libraries tightly coupled together. You can enforce your own clean separation between the UI layer and backend but, in our experience, most teams cannot or will not invest the time to do it. They’d rather get their UI up and running quickly than focus on defining and implementing best practices for architecture and maintenance.

“You may want to think about separating the presentation from the business logic to make it easier (or at least possible) to move the code, say, from the desktop to a touchscreen interface (which might mean moving from widgets to QML).”
– Top 10 Scary Qt Mistakes, ICS Insight blog

The second option that Qt has recently announced is Qt Lite, a trimmed-down version of Qt that changes the include philosophy from “everything included” to starting with the minimum deployable configuration and letting you add additional modules. Qt Lite comes with a new configuration system that lets you declare the content you want from each module and includes a new, software-based Qt Quick 2D Renderer as an alternative to relying on hardware that supports OpenGL. While this customizability is powerful, it still takes knowledge and time to act on it, and the framework currently supports only ARM Cortex-A SoCs, with Cortex-M7 MCUs being considered for the future.

Testing your UI

There are a few methods for testing Qt applications from a UI perspective:

  • Qt Test – Qt’s built-in testing framework, including basic GUI testing for mouse and keyboard simulation. Qt Creator includes an autotest feature that builds on Qt Test.
  • Qt Quick Test – Qt’s built-in testing framework for QML applications.
  • Build your own tests or use any number of C++-based test tools, such as Google Test or Squish by Froglogic.

Storyboard Suite offers two ways to support your application testing: tools built into the product and APIs that expose methods and data to include in your own test framework. For example:

  • Testing the UI: Storyboard allows you to capture and serialize the complete event stream between the UI and runtime, letting you dump data to the terminal or use the included sample application to play back the stream. You can also inject events into the application via the iogen tool. These features allow you to run functional, performance, and memory tests across a variety of inputs and outputs.
  • Identify UI changes: Storyboard provides a screen capture plugin, letting you perform pixel-based comparisons between different versions of the UI (see the sketch after this list).
  • Deep performance testing: The Storyboard logger plugin lets you capture various application performance metrics, such as redraw times, action execution times, and event processing times.
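As a rough idea of what a pixel-based comparison boils down to, here is a generic C sketch (not the Storyboard screen capture plugin itself; the function name is made up) that counts differing pixels between two same-sized captures:

    #include <stddef.h>
    #include <stdint.h>

    /* Generic illustration: count how many pixels differ between two captures
       of the same screen, assuming packed RGB888 buffers of width * height pixels. */
    static size_t count_changed_pixels(const uint8_t *a, const uint8_t *b,
                                       size_t width, size_t height)
    {
        size_t changed = 0;
        for (size_t i = 0; i < width * height; i++) {
            const uint8_t *pa = a + i * 3;
            const uint8_t *pb = b + i * 3;
            if (pa[0] != pb[0] || pa[1] != pb[1] || pa[2] != pb[2])
                changed++;
        }
        return changed;   /* 0 means the two UI versions rendered identically */
    }

A result of zero means nothing changed visually; anything else flags a screen that deserves a human look.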

These tools (and more) are useful for developers, but what about designers?

To review, validate, or simply show off, designers can use the built-in simulator within Storyboard (which actually runs a version of the Storyboard runtime built for the host computer) or quickly run on real hardware in just a few clicks. With Storyboard 5.0, there are additional options to execute applications direct-to-target via SCP or as a standalone Microsoft Windows executable. And since there’s no cross-compilation with Storyboard, you can run it on multiple target types, such as Android or iOS mobile devices, if you want to pass your designs around the team for feedback.

Speaking of which…

Time to experience and overall cost

We coined the phrase “time to experience” to mean the time it takes to get an application from design to runtime, letting the designer show off and validate their creation with stakeholders. With Qt, the designer must work with a developer to realize their design in code and deploy it to the target hardware. This usually involves back and forth between the two as the design has to be tweaked to fit into development constraints and it definitely includes all the hiccups of debugging and running a real application.

Qt has recently released a beta version of a Photoshop customization and synchronization feature but it looks like it’s limited to Qt Quick Controls and not the entire UI.

With Storyboard, time to experience is much faster because a developer isn’t actually needed – the designer simply imports their design from Photoshop, adds any desired animations and behavior, and runs on either the built-in simulator or real hardware. This video shows how you can take an existing design and deploy it on an NXP i.MX board in less than 12 minutes…real time!

Storyboard also allows you to tweak some or all of the UI design and re-import the changes for immediate validation, as in this example of two different versions of the same UI:

Photoshop reimport

What about time to release?

Comparing debugging, deployment, and testing times between Storyboard and Qt is a challenge, as it depends on the application being built, the experience of the people building it, etc. It also depends on what is being built, as Qt includes APIs for Wi-Fi, Bluetooth, database integration, and more, while Storyboard is purpose-built for UI development. When we focus strictly on UIs, two things we can compare are the iterations and handoffs that occur between design and development.

Typically, and one can argue this happens 100 percent of the time, designs are handed off to developers and UI/UX practitioners don’t see them again until later on, sometimes not until alpha or beta product testing. This siloed approach, in which there is a complete loss of design control, creates development delays as the designer tries to adapt UI features for a product that’s getting closer and closer to release. It would be fair to say that the developer wields a lot of power here, as release timelines get closer and “good design” is sacrificed for “code complete.”

Traditional vs. Storyboard workflows (comparison between workflows)

With Qt, there are two implications. First, UI elements will look like any other Qt application, unless additional time is spent to customize the look and behavior. Second, delays are introduced into the timeline as designers and developers go back and forth to achieve a compromise between design intent and what the code can actually do. Try imagining how long it would take to transform a complete Photoshop layout into a C++ Qt app.

With Storyboard, the designer and developer work in parallel within the same environment right from the start and, most importantly, the final UI reflects exactly what the designer intended, in both look and behavior. You don’t have to imagine transforming a Photoshop layout into a runtime application; Storyboard does it for you. There are no back-and-forth delays and, with the other tools that Storyboard provides, such as Photoshop re-import, design compare and merge, the animation timeline, and simulator testing to name a few, every step of the design-to-development process is streamlined.

“We had three months to deliver on our technology concept car for the Consumer Electronics Show, and choosing Crank’s Storyboard Suite to develop many of our UI components proved to be an excellent decision.”
– Andy Gryc, automotive product marketing manager, QNX Software Systems Limited, a subsidiary of BlackBerry

As far as cost is concerned, it’s important to consider that while Qt for Application Development has both free and commercially-licensed versions, the embedded device version, Qt for Device Creation, is available only under commercial license. You can see a license comparison and pricing of Qt options here.

For both Qt and Storyboard, the actual cost to you depends on the type of license and number of development seats. The overall project cost, however, is dependent on how many cycles of design, development, and testing you go through to release your product.

The choice is yours

Between this post and the last, we hope you have enough information to decide which UI development framework is right for you. If you’ve taken the time to read them, great; otherwise, the tl;dr is:

  1. To rapidly create lean, high-performance embedded UI applications that match designer intent for aesthetics and user experience, across a wider variety of platforms, choose Storyboard Suite.
  2. To create embedded UI applications with a pre-configured software stack that reduces work for features beyond the UI, choose Qt for Device Creation.
  3. To get the best of both worlds, check out how to integrate Storyboard Suite and Qt.


5 capabilities that entrench embedded UI designers in the development process

The following is a post from Fancy Dan, one of our creative and talented in-house designers. Dan uses Storyboard Suite on a daily basis to create awesome user experiences for embedded applications. I asked him to share the top 5 features that make his job easier and his designs awesome.

Storyboard Suite top features

Dan’s top 5 list

  1. Integrate content from Adobe Photoshop » Storyboard brings a .psd file to life
  2. Add and refine motion easily » Easy to use, but do a lot
  3. Layout and position content precisely » Subtle features that go a long way
  4. Enhance user experience with actions » The combination of trigger events, actions, and parameters
  5. Make it real. Touch the design. » From simulator to target

Integrate content from Adobe Photoshop

As a designer, I find working in Storyboard to be a great experience, and a major contributing factor for this is the Photoshop integration. Photoshop has been around for decades, and it plays nicely with Adobe Illustrator, two tools that I use to design and plan a UI. Many of the design-based decisions I make in Photoshop, such as organization, naming, and planning, are brought forward to Storyboard when I import my .psd. When I start working in Storyboard, I’m working on a project that I already have a good handle on because the Application Model in Storyboard is based on the content from my layers view in Photoshop.

With Smart Objects, I’m able to go back into Photoshop and make edits to my Illustrator content or layer effects and bring them back into Storyboard. Because I can import multiple Photoshop files into Storyboard, when I complete one section of the UI design I can bring it into Storyboard and start adding functionality. At the same time, I can continue to work on the UI design for the next section and bring the UI together in Storyboard as each part is ready for import.

What’s extra great?

The Photoshop re-import process. If I change the look and layout of an existing design in Photoshop, I can re-import the artwork into my Storyboard project. The project compares itself to the new Photoshop file and lets me replace existing content with the new designs, updating the look, size, and positioning from my redesign while retaining existing functionality.

Add and refine motion easily

Motion makes a UI come to life, so it’s important to have a workflow that includes working with animations. In Storyboard, working with animations isn’t convoluted. If I want to create, edit, or preview an animation, it’s pretty straightforward. When recording an animation, changes made to the UI are recorded as animation steps and saved. The animation timeline view shows the saved steps. Here I can see the steps in relation to one another and make further edits to the timing, easing rates, and values. Using the animation preview, the animation can be played back, paused, and scrubbed through so that I can see how all of the movements work together.

What’s extra great?

I can quickly create a reversed version of an existing animation, fine-tune the ease rate of motion, and apply a single animation to multiple model objects. These capabilities allow me to easily control how things move without having to touch any code.

Layout and position content precisely

When the design is complete, most of the UI layout has been established. Further refinement like adding logic, motion, and additional content is easy with the layout and positioning options in Storyboard.


Reference point location – The reference point location might appear to be a small detail, but it does a lot for controlling the position and transformation of an object. The reference point location icon is available in the Properties view for dimensions. The top-left reference point shows the x,y location of an object, but selecting another reference point shows and applies transformations based on that relative point. If I want to increase the width of a control by 50 pixels and have the center point remain the same, I shouldn’t have to move the control 25 pixels to the left and then add 50 pixels to the width. I simply want to add 50 pixels to the width, have it expand from the middle, and let the software figure out the rest.
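To spell out the arithmetic the reference point saves you from, here’s a tiny illustrative C sketch (the rect_t type and function are made up for this example; this is not Storyboard code):

    /* Growing a control's width by delta while keeping its center point fixed. */
    typedef struct { int x, y, w, h; } rect_t;

    static void grow_width_about_center(rect_t *r, int delta)
    {
        r->x -= delta / 2;   /* e.g. delta = 50: x shifts 25 px to the left */
        r->w += delta;       /* and the width grows by the full 50 px */
    }

With a center reference point selected, Storyboard does this bookkeeping for you.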

Property value calculator – An easy way to control the value of a property is to calculate that number right in the property field. If I want to move a control to 20 pixels left of one-third of the way across a screen designed at 720p (1280 pixels wide), I could go to the X position field in the Properties view and enter 1280/3-20 (the screen width divided by 3, minus 20) or 1280*.333-20 (the screen width multiplied by one-third, minus 20), and the X value will automatically calculate to 406. If I want a circular progress bar to fill up 43%, I can enter 360*.43 in the Rotation field and the angle will calculate to 154.8°.

Applying some quick math makes it easy to land content in a specific spot. I can apply it to an alpha value, apply it to an offset, or apply it to the timing in an animation. It saves me from going to a separate calculator app, writing it out, or doing it on my phone.

What’s extra great?

Q: Does it use BEDMAS? A: Yes, it does.
Q: What the heck is BEDMAS? A: Brackets, Exponents, Division, Multiplication, Addition, Subtraction (the standard order of operations).

Enhance user experience with actions

It becomes increasingly engaging to work on a UI as it takes on more functionality, and adding actions to the UI is how that functionality grows. There are a lot of trigger and action combinations to make things happen, so I’m not limited in what I can do.

Storyboard provides an extensive list of trigger events that goes beyond the basic taps and gestures available in most prototyping tools. Many things can trigger an action, such as the completion of an animation, arriving at a particular screen, completed timers, or scrolling starting or stopping. With a trigger in place, I can select an action. Actions can call an animation, stop an animation, go to a new screen, change some data, start a timer, or, for those who are scripting savvy, call a Lua event. Depending upon the type, an action’s parameters can vary. An action parameter might cause something to loop, indicate a direction, set a duration, or select an external file. Events can be fired in any number of ways, from any of the model objects that can trigger them.
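As a generic sketch of how a trigger, an action, and its parameters might hang together as data (hypothetical types and names, not Storyboard’s internal model), consider:

    /* Each binding pairs a trigger with an action and its parameters. */
    typedef enum { EV_PRESS, EV_ANIMATION_DONE, EV_TIMER_EXPIRED, EV_SCREEN_SHOWN } trigger_t;
    typedef enum { ACT_PLAY_ANIMATION, ACT_GOTO_SCREEN, ACT_SET_DATA, ACT_CALL_SCRIPT } action_t;

    typedef struct {
        trigger_t   trigger;   /* what fires the action */
        action_t    action;    /* what the action does */
        const char *target;    /* e.g. an animation, screen, or variable name */
        int         param;     /* e.g. a loop count or duration in milliseconds */
    } binding_t;

    /* A single control can carry several bindings, for example: */
    static const binding_t settings_button[] = {
        { EV_PRESS,        ACT_GOTO_SCREEN,    "settings_screen", 0   },
        { EV_SCREEN_SHOWN, ACT_PLAY_ANIMATION, "fade_in",         250 },
    };

The point is simply that triggers, actions, and parameters combine as data, which is why behavior can be added or copied between controls without touching code.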

What’s extra great?

I can copy an action from one control to other controls. For example, I can copy a screen change action from one button and paste it to other buttons to give them the same functionality.

Make it real. Touch the design.

Working in design, sometimes your work is used only as a representation of the UI. This isn’t limited to UI design. Mock-ups, presenting artboards, and showing prototypes for apps or websites are great, but at the end of the day, seeing the design in the real world and its surrounding context is awesome.

Storyboard keeps the design in the designer’s hands, which enables me to deliver my design vision to the final product. In traditional workflows, this often isn’t the case, and instead, the UI on the embedded device isn’t a reflection of the designer’s work. It’s common for designers to create prototypes that demonstrate how the UI should look and behave, and then provide the development team with the UI assets to create the UI in code. It’s an inefficient workflow to have the development team recreate what the designer has already solved.

Storyboard provides tools that make it easy to quickly export applications directly to hardware for testing. Seeing how an animation runs or how a transition looks on an embedded device gives me insight and confidence that the UI performs as intended. Good design principles value function over form. Using Storyboard, I have everything I need to test how my design will function on an actual target device.

What’s extra great?

I can test my application on an embedded board, make changes to the UI, and then export the app back to the board in under a minute.
*Drops Microphone*

To try Storyboard Suite for yourself (and to feel like a design rockstar), start your free trial now and see how easy it is to create beautiful embedded UIs from concept to production.
