
How Teams Develop Embedded UIs Remotely With Storyboard

Embedded systems and UI development can be complicated. Factor in social distancing and teams being forced to continue development remotely, and the situation becomes even more complex. But it doesn't have to be!

Thomas Fletcher, Co-Founder and VP of R&D, talks about the UI development tools our clients and our own team are leveraging today to connect remotely and keep embedded projects moving forward, in this edition of Embedded UI Talks with Crank.

Hear about our newest feature update for the Storyboard IO Connector and its support for TCP/IP, and how you can leverage it to inject events over a network, effectively removing the need for the UI and back-end to be physically deployed in the same location.


Video Transcription

Hello everyone! Welcome to Crank Software's Embedded UI Live Series. We're going to be talking about working with remote teams in embedded UI product development. My name is Thomas Fletcher, and I'm VP of R&D here at Crank Software, where I look after our Storyboard embedded UI product development team. I'm going to be sharing some of my experiences working on embedded UIs and embedded development in this series of live sessions with you.

Finding a collaborative embedded development environment

As I mentioned, today we're going to be talking about working with remote teams on embedded UI development projects - a very timely topic, given the current situation of the world where a lot of teams have broken out of the office space and are working from home. What I want to talk about today is two different staff members who are part of an embedded UI team and how they might work and collaborate with one another. I'm going to use Eddie and Ingrid. Eddie is my hardware developer. I've got something that I'm going to put up on the board here. He's working at home, probably at his kitchen table right now with this embedded hardware. This team is putting together a new coffee machine of some sort. He's brought the hardware home. He's working on the hardware, the I/O controls, and the device drivers, and looking at how he's going to get input from the system and relay that data into other parts of the system. Again, Eddie the developer is working from home.

Thomas working on the whiteboard, setting up Eddie and Ingrid's work spaces.

Ingrid is our other developer. She is also working from home but has a different role on the team. Her role is to incorporate the content from the graphic design team. Here, we're talking about Photoshop content, Illustrator content, all the graphic design elements that have been put together for this coffee machine product by the UI designer. She's working on integrating the graphic designs and bringing them into the user experience, to reflect the reality of what the team wants to put out into the product. She was looking at different technologies to do so and ended up choosing Storyboard, of course. There are other technologies that you could use for embedded UI development, but in this case, she's using Storyboard to help bring that Photoshop and Illustrator content into the software, and she rapidly realized the user interface could be created in a desktop environment, right? She's working to take static content and put together animations, user interactions, the screens, the modalities of the application; all of that in the same development environment.

Ingrid's UI graphic designs in Photoshop.

Remotely interacting with hardware in separate work locations

So back to Eddie and Ingrid, two members of the team. Typically, if they were actually in the office working together, they probably would be working independently. They're working on very different areas, so there's not actually a lot of tight coupling between them. But as a team, they're probably used to getting together and just sort of bouncing ideas off each other. And that doesn't really change in our work-from-home environments. I should also note here that Ingrid is just working with a laptop, so she doesn't necessarily have the hardware like Eddie does.

Their communication right now is probably done using Slack, right? Probably using Google Hangouts or Zoom calls, you know, the same as everybody else in this current situation, where workers are able to work from home fairly effectively because we can bring our tools with us. They probably have a shared repository for their team, where they put all the content for the UI development of their embedded product, right? This includes whole system images, and there's a whole cast of other team members working alongside them.

Ingrid's Storyboard development environment, including her Photoshop UI design files.

Now, Eddie can work just fine. He's doing C development. He's doing his C compilations. He's interacting with his hardware locally. Ingrid is now doing the same thing. She can work in parallel with Eddie while working on the UI development. Ingrid can be working at a desktop level because Storyboard allows you to simulate and work in an environment that doesn't require the embedded hardware. But one of the essential design choices that goes on here is how we get the backend components and our frontend talking to one another, right? What does that coupling look like? If you're familiar with backend and frontend terminology in web design, you know the frontend is typically what runs in the browser, and the backend is the server. In embedded development, backend and frontend play by the same rules. The backend is usually the device drivers, the system logic, maybe even in some cases the business logic of the application. And then the frontend is the user experience. The frontend is what the user is actually engaging with, touching, interacting with, you know, those types of experiences.

Exchanging data between backend and frontend using simulation

The question becomes: what kind of contract do you have for exchanging data back and forth between the backend and the frontend? A typical environment in an embedded capacity might be that the frontend and the backend are tied or coupled closely together, right? It might be that you're using a graphics library, or you're calling into C functions, and those C functions are not only manipulating the user interface but also communicating data from the backend to the frontend. That's not an ideal scenario when teams are working remotely. It introduces too much coupling. What this means is that Ingrid would actually have to tie directly to a source repository that Eddie would also be working with. At the same time, they have to synchronize their commits. They have to ensure that they're always working in sync, in lock step, right? Ingrid can only stub in callback functions until Eddie has the functionality ready to go, or else she has to stop and wait.

A better way of working is actually to decouple this. Keep the same logical operations of the backend and frontend, but instead couple them together with events. The idea here is that having an event flow back and forth means your event flow is giving you a contract. It's giving you an API. As for how those events flow back and forth: on an embedded system, you might be using a message queue, by which the backend or hardware can communicate up to clients like the UI. The backend communicates the system state as a series of events with data payloads. And similarly, the UI can communicate back to the backend using a different set of events with different payloads. Right? This type of decoupling means that you can change the transport, because it's decoupled from the actual data payloads - the information that's being relayed.
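
To make that concrete, here's a minimal sketch of what an event contract might look like in C, using a POSIX message queue as the transport. The queue name, event names, and payload layout are all invented for illustration; they're not Storyboard's actual API.

```c
/* Hypothetical event contract between backend and UI.
 * Queue name, event names, and payload layout are illustrative only.
 * Build on Linux with: gcc backend.c -lrt */
#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>
#include <string.h>

#define EVENT_QUEUE    "/coffee_ui_events"    /* assumed queue name */
#define EV_TEMP_UPDATE "backend.temp_update"  /* backend -> UI */
#define EV_BREW_START  "ui.brew_start"        /* UI -> backend */

typedef struct {
    char name[32]; /* which event this is */
    int  value;    /* payload, e.g. boiler temperature in degrees C */
} ui_event_t;

/* Backend side: publish a temperature reading for the UI to consume. */
static int send_temp_update(mqd_t q, int temp_c)
{
    ui_event_t ev;
    memset(&ev, 0, sizeof(ev));
    strncpy(ev.name, EV_TEMP_UPDATE, sizeof(ev.name) - 1);
    ev.value = temp_c;
    return mq_send(q, (const char *)&ev, sizeof(ev), 0);
}

int main(void)
{
    struct mq_attr attr = { .mq_maxmsg = 8, .mq_msgsize = sizeof(ui_event_t) };
    mqd_t q = mq_open(EVENT_QUEUE, O_CREAT | O_WRONLY, 0644, &attr);
    if (q == (mqd_t)-1) { perror("mq_open"); return 1; }
    send_temp_update(q, 92); /* the boiler just hit 92 degrees C */
    mq_close(q);
    return 0;
}
```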

Thomas working on the whiteboard, showing how the Storyboard Simulator works.

What this decoupling does is allow simulation to occur in a much easier fashion. Ingrid can work independent of what the hardware or Eddie is doing, and at the same time, when we look at what Eddie is doing, Eddie doesn't need to know what the UI is actually doing with his data. By the same token, he can have his own feedback loop here: in his case, he's simulating the UI, which in turn stimulates the backend. Right? And that's that API contract. This is part of a good design.
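
Eddie's "simulated UI" can be as simple as a consumer that drains the same queue the real UI would read and prints each event. A minimal sketch, again assuming the hypothetical contract from the previous example:

```c
/* Hypothetical "simulated UI" for backend testing: it drains the same
 * event queue the real UI would read and prints what the UI would render. */
#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>

typedef struct {
    char name[32];
    int  value;
} ui_event_t; /* same invented contract as the previous sketch */

int main(void)
{
    mqd_t q = mq_open("/coffee_ui_events", O_RDONLY);
    if (q == (mqd_t)-1) { perror("mq_open"); return 1; }

    ui_event_t ev;
    for (;;) {
        if (mq_receive(q, (char *)&ev, sizeof(ev), NULL) == -1)
            break; /* queue closed or error: stop the loop */
        printf("UI would handle %s = %d\n", ev.name, ev.value);
    }
    mq_close(q);
    return 0;
}
```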


Replacing simulation using TCP/IP connections and events

Alright. So what I want to look at now is: what's the next step? Eddie and Ingrid are both developing their software, both working through the system. Eddie's really excited; he's got his backend working. He's ready to go. He's got data coming live from his hardware. So what's the next step for him? Ingrid, on her side, also has the UI working. It's ready to go. She's been simulating. It has a good feel. She's been working at the desktop level. So what is she doing? She has desktop simulation, working with simulated backend data. What we'd like to do is connect these two together. This is scenario number one.

What we're going to do is set up a development environment where the hardware can actually be providing information to the desktop UI - replacing our simulated backend. Now, this might occur by Ingrid pushing her project into a source repository and Eddie having to pull it out. But that means that Eddie now has to know how to configure the UI and get all of Ingrid’s stuff going. And that's not really his thing, right? Similarly, if Ingrid had the hardware, she'd have to figure out how to get the system image going. Also not really her thing. 

A 4-minute Getting Started video on Storyboard using a TCP/IP connection.

So what we'd like to do is leverage the fact that we have a network connection. We already have Eddie and Ingrid talking together in an informal fashion as part of their collaborative, remote team. So let's extend that and push it down into the embedded system that they're building. What we can do is leverage, for example, a TCP/IP channel or connection to send the events. With a little bit of configuration on Eddie's side, he can have the hardware he was probably already connecting to remotely redirect its I/O control over a communication channel that can then be funneled across the Internet into Ingrid's simulated UI. This is really powerful because it means that Ingrid and Eddie can work together in one channel, for example, Slack. They can be talking while Eddie is generating inputs on their application, and those inputs are flowing exactly as they would from the hardware - because they are coming from the hardware - all the way into the user interface. Ingrid, as the owner of the user experience, is able to monitor that live data. This is data that's really coming from the hardware, so she can validate any assumptions that were made around the event API that was previously put in place, right? There's always going to be a little bit of misunderstanding or lack of clarity, and the idea here is to reduce that. Ingrid can quickly jump on any UI adjustments that need to be made in her local environment. Similarly, if there's something wrong with the data being generated, Eddie can jump on that in his environment. But they don't actually have to exchange any source code, and they don't have to change any configuration information. They can work really effectively in this manner.
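
As a rough sketch of the idea (not the Storyboard IO Connector itself, which handles this transport for you), here's how a backend might forward one of those events over a plain TCP socket to the desktop simulator. The host, port, and newline-delimited wire format are assumptions for illustration:

```c
/* Hypothetical sketch: forward a backend event over TCP to the desktop
 * UI simulator. Host, port, and wire format are invented; Storyboard's
 * IO Connector provides the real transport. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

#define SIM_HOST "192.168.1.50" /* Ingrid's desktop (assumed address) */
#define SIM_PORT 9000           /* assumed event port */

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    struct sockaddr_in addr = { .sin_family = AF_INET,
                                .sin_port   = htons(SIM_PORT) };
    inet_pton(AF_INET, SIM_HOST, &addr.sin_addr);
    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("connect");
        return 1;
    }

    /* Send the same event the local message queue would carry, framed
     * here as simple newline-delimited text. */
    const char *frame = "backend.temp_update 92\n";
    write(fd, frame, strlen(frame));
    close(fd);
    return 0;
}
```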

Decoupling to work in parallel

It's not just about going one way. We can have those events flowing down from the hardware to the UI in a simulated fashion, right? So that's the first step. We have a clear API, and we have event decoupling that gives us the ability to work in parallel. Now two team members are working effectively and bringing their individual elements together - but still separately. What's the next piece? Well, ultimately, what I'd like to do is take that user interface and run it right on top of this hardware. If I can get to that stage, then this specific team is ready to start sharing these two components with the rest of their team members, right? They can now start talking about pushing to the source repository without any concern. This decoupling now goes into our third phase, which is running Ingrid's UI on the hardware. I would like to put the UI onto the display.

Now you might be thinking, well, okay, that's easy. Ingrid just pushes her work out into the source repository, and Eddie pulls it down. He does the configuration and he builds it. Sure, that's definitely possible. But at that point in time, the whole team is seeing the configuration, even if you're using branches. And maybe there are iterations that need to go on there, some tuning that needs to happen. It's better that Ingrid and Eddie can continue to work effectively coupled together in their independent areas rather than this sort of handoff of "Okay, now you try." If Ingrid can stay in the loop on the UI when changes need to be made, and Eddie can be right there to respond, then we know that the hardware interaction and the data going in and out are going to work. What we're really trying to do now is make sure that the visual representation of the UI is accurate, that it's actually looking right on the chosen device.

Communicating is key

To make sure the visual representation of the UI is accurate on the device, Ingrid can deploy straight from her Storyboard development environment. Again, we have a network connection, so she can deploy from her Storyboard environment all the way down to the target hardware using our secure copy and SSH connections. Eddie can sit there watching the hardware, or even better, he can take a webcam, just like we have right now, point it down at the hardware, and continue that Slack conversation with Ingrid while she watches the hardware and looks at the interaction. This type of transfer allows her to control what's being pushed to the embedded target. It allows Eddie to really focus on what the system is doing and what it needs to be doing. They can work collaboratively in that way and see their UI project really running on the hardware. That's really three distinct steps to move it along.

Thomas displaying how decoupling and events allow UI development to occur in parallel.

Now, of course, the glitch in all this is that Eddie was so excited about getting his data going from his regular I/O connectors (that was really his sole focus) that he wasn't focused on the UI. And maybe, to that end, he actually totally forgot about doing the touchscreen driver. So now that work is still pending. Ingrid has already pushed her UI to the hardware. What is she going to do now, right? It's a touch interface, and perhaps they want not just to see the data coming from their sensors into the UI but to actually simulate what the user experience looks like. She could do that too because, given the event-driven nature, we're just going to leverage the same communication channel we had going when she was running the UI on her desktop with simulated data from the hardware - providing input to it and receiving data from it. We can take that same channel so she can generate input for the UI. A little awkward here, we're down below the whiteboard. But she can generate the input for the UI, and she can do that directly from her development environment. We're using an I/O connector that can generate events, or she can provide Eddie with perhaps an automated playback file; something that allows them to take events that would normally be generated by the user touching the screen and inject them into the user interface. So, these are technologies that, yes, Storyboard can do. But in general, when you're doing your embedded UI development, you're talking about decoupling your teams from one another - whether it's in the office so they can work in parallel and work effectively, or in this case, because we're all working from home and physically distancing ourselves, and we can't just come over to someone's desk and say, "Okay, can you run this on the hardware now? I want to take a look at it," right? It's a slightly different mode of operating, but it's not actually too distant from normal operation. And actually, everything you're seeing here is really good practice from an embedded architectural design perspective: get that decoupling, and allow people to work independently and in parallel.
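
As an illustration of that last idea, here's a hedged sketch of a playback injector: it reads recorded touch events from a text file and replays them into the UI over the same TCP channel shown earlier. The file format ("delay_ms event x y") and the send_frame() helper are invented for this example:

```c
/* Hypothetical playback injector: replays recorded touch events from a
 * text file into the UI over the TCP event channel shown earlier.
 * The "delay_ms event x y" file format is invented for this sketch. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Stand-in for the TCP write from the earlier sketch. */
static void send_frame(int fd, const char *frame)
{
    write(fd, frame, strlen(frame));
}

void replay_touch_events(int fd, const char *path)
{
    FILE *fp = fopen(path, "r");
    if (!fp) { perror("fopen"); return; }

    unsigned delay_ms;
    char event[32];
    int x, y;
    char frame[64];

    /* Each line: how long to wait, then the event name and coordinates. */
    while (fscanf(fp, "%u %31s %d %d", &delay_ms, event, &x, &y) == 4) {
        usleep(delay_ms * 1000); /* honor the recorded timing */
        snprintf(frame, sizeof(frame), "%s %d %d\n", event, x, y);
        send_frame(fd, frame);   /* inject the touch event into the UI */
    }
    fclose(fp);
}
```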

Wrap-Up 

To summarize: keep your abstractions clean so the frontend and backend can be decoupled through an event-based interface. Keep your options open in terms of the modes in which you can share elements of the frontend and the backend. Be able to test independently to push your way through your work, and then, step-wise, move your product along without necessarily engaging the whole team. Because at the end of all this, when we get to this point, we will have something that is almost product ready, right? It'll be ready to go into a repository to be shared with everybody else on the team. Eddie and Ingrid can then be very, very confident that what they built is solid and working on real hardware.

Thank you very much for your time today. Like I said, we'll be doing these sessions on a bi-weekly basis. I'm Thomas Fletcher from Crank Software.

Originally published May 7, 2020.

Thomas Fletcher
Written by Thomas Fletcher

Thomas is a co-founder and VP of Research and Development. With 20+ years in embedded software development and a frequent presenter at leading industry events, Thomas is a technical thought leader on Embedded System Architecture and Design, Real-time Performance Analysis, Power Management, and High Availability. Thomas holds a Master of Computer Engineering from Carleton University and a Bachelor of Electrical Engineering from the University of Victoria.
