Automated Testing at Crank!

Here at Crank Software, we are continuously updating Storyboard in order to deliver a better experience for our customers every day. In preparation for the upcoming Storyboard 3.2 release, new features are being added all the time and the Storyboard Engine has to be tested against all our supported platforms to make sure that the latest changes don’t regress existing behaviour.

Our current continuous integration system already runs automated tests on self-hosted platforms such as Windows, Mac OS X and Linux. We wanted to expand the automated tests to our runtime platforms and reduce our manual testing using the same framework. To start things off, we set up two Texas Instruments AM335x Starter Kit boards in our lab, where they should remain for the most part. These boards run a Linux distribution and QNX 6.5, respectively. On Linux we run our tests using both the OpenGL ES 2.0 renderer and the FBDev renderer. On the QNX side of things, we have started with the OpenGL ES 2.0 renderer for now. The boards are connected to our buildbot, which issues the test commands and records all the data it receives. This is useful because it lets us see how our data trends over time. For example, we can create a graph that plots performance data across runs: it is possible for all the tests to pass while performance still takes a hit, and when we notice something like that we take the necessary steps to fix it.
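
As a rough illustration of the kind of trend graphing this enables, here is a minimal sketch that plots recorded frame-time samples over time. The file name and column names are hypothetical stand-ins rather than the actual data our buildbot records.

    # Minimal sketch: plot recorded performance samples over time so a slowdown
    # stands out even when every functional test still passes.
    # The CSV name and columns below are hypothetical stand-ins.
    import csv
    from datetime import datetime

    import matplotlib.pyplot as plt

    dates, frame_ms = [], []
    with open("am335x_linux_gles2_perf.csv") as f:
        for row in csv.DictReader(f):  # e.g. columns: date, avg_frame_ms
            dates.append(datetime.strptime(row["date"], "%Y-%m-%d"))
            frame_ms.append(float(row["avg_frame_ms"]))

    plt.plot(dates, frame_ms, marker="o")
    plt.ylabel("Average frame time (ms)")
    plt.title("AM335x / OpenGL ES 2.0 renderer - nightly performance")
    plt.gcf().autofmt_xdate()
    plt.savefig("perf_trend.png")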

By adding our embedded targets to our automated tests, we can test more quickly and effectively, and there is less room for error once everything is set up. This should ultimately help us create a better product for everybody.

— Huzefa

Popularity of JavaScript is rising

The TIOBE Programming Community Index rankings show that JavaScript's popularity as a programming language is on the rise. It has recently risen into the top 10 of popular programming languages, based on searches on the web. This article outlines the top 10 languages and goes over how the chart is tabulated.

Given the line of work that we are in here at Crank, it's actually surprising that JavaScript isn't higher. There seems to be a movement towards the language as a way of providing content to end users, and given the importance of the internet and its content to everyday life, JavaScript should be more of an integral part of the programming culture.

One theory as to why JavaScript isn't higher is that it is already fairly well known, and therefore not a lot of searches are performed about it. That may be a weakness in the way that the TIOBE Programming Community determines its rankings.

–Rodney

Should a proven UI be updated?

I was reading some articles concerning the upcoming release of Windows 8.1 and the much heralded return of the start menu button. It would appear that Microsoft has heard the complaints of its users and now wants to backtrack on the decision to remove the button, pushing that functionality back into the next release of Windows. This got me thinking: why did they remove the button in the first place? Why did they feel this was a good decision at all?

The line of thinking when Microsoft released Windows 8 was that you would work in the same environment that you developed for (if you developed for Windows 8 mobile, for instance), and that by offering only one product across the board they could make a better product, since they only had to test one instance. The biggest reason, though, was that users would become accustomed to everything Windows 8. If you used it on a desktop, it would be the same on a tablet or on a phone. You only had to learn one UI and you were good to go across a multitude of devices.

Those reasons do make sense, but where they fall apart is that a mouse/keyboard-driven UI is inherently different from one that is driven by touch. By trying to accommodate both, you end up limiting the devices that are rooted purely in one input model, and those are the majority of the devices out there. I see very few tablet users using a mouse. They may hook up a keyboard, but usually only when they are writing an email or taking a note. Most of the time they are in a purely touch-driven environment. Conversely, there are very few laptops and desktops out there that offer a touch screen as standard.

An example of this difference in workflow is launching an application in Windows 8 that you don't have an icon for. You first have to move the mouse pointer to the upper right corner. Then you have to click on the all-applications button. Then you have to search through the apps by scrolling sideways until you find the one you are looking for, and finally click on the correct icon. You could search for the app instead, but that still means navigating to the upper right corner, clicking in the search box, switching to typing on the keyboard, and then clicking on the result. Also, while you are doing this, you don't have a full view of your desktop, so you may lose sight of the information that prompted the application switch in the first place. This process does work in the touch environment, however: tap the upper right corner, tap a button, then swipe through the list until the app is found and tap on it. It works for a touch screen, but for a mouse, a simple menu that you click on to get a list of options is the better way to go, since moving a mouse is more of an impediment than touching a screen. You want to limit the distance the mouse pointer has to move, and also limit the context switches between keyboard and mouse.

So it really isn't surprising that there was an outcry from the mouse/keyboard users about the disappearance of the start menu, as it is a better workflow for them. Windows 8 isn't the only OS that has tried to update its UI to make it more touch friendly, either. Ubuntu has been releasing Unity for a couple of years now, and people are still having a hard time getting used to it. So much so that people revert to using gnome-shell, which is better but still not ideal, or they install something like Linux Mint, which offers an underlying Ubuntu OS with the previous panel-style UI. In the end, it would seem that if you have a UI that works and allows for a good workflow, you should tweak it rather than overhaul it, and then offer a separate interface for devices that have a different input model.

–Rodney

Flat UI design is the new normal

Wired has posted an article concerning a recent shift in UI design. Most of the UIs that are designed for devices nowadays are flat. The 3D effects, shadows, and gradients that used to be commonplace in most UIs have now disappeared in favour of a cleaner, minimalist design.

They cite a couple of reasons for this shift. The first is that as screens get smaller, the minimalist approach allows you to do more with less screen real estate. I'm not sure I agree that screens are getting smaller, but I do agree with the logic that the less cluttered a UI is, the more information you are able to display.

The second reason the article gives is that because of the wide adoption of portable devices, the extra visual cues in the design of the UI are no longer needed: a majority of users are now familiar with these types of devices and can navigate them without a lot of guidance through the UI. This actually makes a lot of sense to me. As we become more and more accustomed to what a device can do, we tend to judge a device's success by its functionality rather than its look. If the device can do what we want it to do, then we are going to be happy with that device.

That doesn't mean that a device's UI can be a mess. People will still want things to look nice. They just won't be clamoring for over-the-top effects to get to the information they want. I wonder if summer blockbusters will ever take the same approach?

–Rodney

Storyboard RCP Stands Alone

Tasked with spearheading a new branch of Storyboard development, my objective was to shed the baggage of an IDE-dependent set of plugins and move towards the development of an application born for the sole purpose of Storyboard project creation. At the same time, we wanted to migrate from the Eclipse 3.x framework to the newer 4.x version, all the while using a new system to build and package (Tycho).

Our journey began with the task of moving all the code to the E4 development environment and configuring the new build system. Thanks to the compatibility layer, the process went as smoothly as the surface of a durian fruit. After that we celebrated with cake and donuts, then moved on to the phase of implementing actual new features in the RCP app. At this point, the entire package size has been reduced to less than half that of the older distribution, which is always nice if you're looking for a smaller resource footprint.

Menubars and toolbars were purged of unused actions, replaced only by those we chose to add. Increased icon sizes allow for easier navigation and higher-resolution graphics. We also rebuilt the file browser for increased versatility (things like importing Photoshop files directly from the browser and integration with the Storyboard project hierarchies).

Other notable features include the toolbar-embedded properties editor and the 9-point anchor. Here we tried to compress the position and size controls while integrating support for anchoring a control based on its corner and center points.

Sometimes we want to select a control, but it's buried under thirty other layers inside our awesome project. Now we have an alt-click-through function that allows us to do just that, right within the editor screen.
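
Conceptually, click-through selection boils down to hit-testing every layer under the cursor and stepping down the stack on each repeated alt-click. The sketch below is purely illustrative, with hypothetical names, and is not Storyboard's actual implementation:

    # Conceptual sketch of click-through selection: repeated alt-clicks at the
    # same point cycle through every control whose bounds contain that point,
    # from the topmost layer down. Purely illustrative; not Storyboard code.
    class ClickThroughSelector:
        def __init__(self):
            self._last_point = None
            self._index = 0

        def select(self, controls, point):
            # controls: topmost-first list of objects with a contains(point) method
            hits = [c for c in controls if c.contains(point)]
            if not hits:
                return None
            if point == self._last_point:
                # same spot clicked again: step down to the next layer
                self._index = (self._index + 1) % len(hits)
            else:
                # new click location: start from the topmost hit
                self._last_point, self._index = point, 0
            return hits[self._index]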

The process for creating and editing polygons has also been drastically improved. No longer do we have to manually type in the coordinates of each vertex; instead, we can create new points and drag them around directly on screen. This potentially opens up possibilities that simply weren't feasible before.

Those are just a few of the exciting new features for the Storyboard application, with many more in store for the coming months as development on this project ramps up.

–Ray