Ishtar Commander: a year in review

Today marks exactly a year since I started work on Ishtar Commander for Destiny. The first version did not ship until September 26th 2015, a few weeks after Bungie gave us The Taken King. Although I have worked on designing mobile apps for nearly 2 decades (welp!), this has been the first app I also developed myself. During the day my work has always been in massive teams where everyone had, in my opinion, over-specialised roles. So on Ishtar I have got to wear every hat, own every decision and learn from every mistake. It has been ridiculous fun, brought me into contact with many wonderful people and resulted in an app that is very close to 300,000 downloads from unique accounts.

Recently I have seen several emails where people have asked me if I still love Destiny and am still motivated to continue with the app. Despite this being a pretty dry period for content, I am, if anything, loving Destiny more than ever. The changes in April have made the game significantly more rewarding, and despite being mediocre at best at PvP I still feel I am slowly progressing there. Rise of Iron looks to be a great update and will bring plenty of new content. I also expect it to lead to plenty of new ideas for the app. However, even without new ideas there are still plenty of significant gaps in the app: item distribution, vendor details, search, the Bungie forums and many more things that are only missing due to lack of time to add them. Only kidding about adding the Bungie forums. Screw that toxic hell hole.

So I am not going anywhere, I am still motivated, and Rise of Iron should turbo boost my app creation ideas. As the app shipped after The Taken King and has only grown in reputation in the past six months, I have high hopes of seeing it jump in popularity when the new expansion hits. As people rush back, their friends can tell them to check out Ishtar.

I just want to thank a few people. First of all the amazing team who beta test the app, give me feedback, point out when a pixel is out of place and tell me to chill out when I am being a grumpy douche rocket. Recently they have also helped localise the app and provided some amazing enhancements to the graphic design. You are all amazing, and it never occurred to me when I created the app that I would have this private mini community who mean so much to me. I then want to thank all the kind people who have contributed financially to the app via Patreon. Recently I have been able to use this money to do things such as renew my Apple dev account and buy a few pieces of software that mean I can spend less time on boring developer tasks and more on errr fun developer tasks. The other 90% is used to buy me chocolate. Finally I want to thank all those people who follow my nonsense on Twitter and leave great feedback on Reddit. Let's make the next year awesome my fellow guardians.

 

Security and the Bungie.net Destiny API

Since releasing Ishtar Commander I have received a steady stream of feedback asking why the app needs PlayStation or Xbox account credentials (username and password) to access Bungie's official API. This is usually accompanied by a request to add some form of secure login 'just like the other Destiny apps out there'. Yet there is no secure login for the Destiny API, only the mistaken idea that one exists.

login.jpg

Above, Ishtar Commander can be seen on the left and the official Destiny companion on the right. Both need your login credentials to work, but is one safer than the other? The correct answer is NO! Based on the comments on Reddit and emails, this is a topic where some important details are missing, and it is leading people to think the visuals of an app can magically make it more secure.

The following post lays out why your details are needed, why there are security implications and why there is nothing 3rd parties can do till Bungie provide an alternative. I have tried to write it to be accessible to anyone that plays Destiny. You don't need to be technically minded or a software developer.

Some Destiny apps don't need any passwords, why is Ishtar Commander different?
Bungie provide a web platform which has two types of endpoints: public and private. Endpoints are just a way of getting specific data. For example, there is a public endpoint that will show you your kill/death ratio in the Crucible. Private endpoints need you to be logged in and offer things such as seeing the contents of your vault and the ability to transfer items to different characters. This is to stop mischievous people seeing and moving all your items around just by knowing your username. Private endpoints can only be accessed with your PSN/Xbox credentials.

Why do I have to type them into this app, isn't it possible it could be capturing my details?
To access your Bungie account Ishtar Commander needs 3 web cookies called bungled, bungleatk and bungledid. These do not have your PSN or Xbox details in them. They are just secure tokens, so the next time Ishtar Commander connects it does not need your credentials, it just needs the cookies. So how does Ishtar get the cookies?

The only way to get the cookie values is to read them from the web browser's cookie database after it has logged into Bungie.net. The only way to log in to Bungie.net is with your PSN/Xbox credentials. So all mobile apps have to have some way of logging into Bungie.net that gives them access to this cookie database. Ishtar Commander has a custom view, while others use a built-in browser view. As these views are 'inside' the app, your username and password could be captured by the app. There is no difference between a custom UI and a built-in browser: both can be manipulated to capture your details.
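To make this concrete, here is a minimal Python sketch of what "the app only needs the cookies" means in practice. The three cookie names come straight from this post; the cookie values and the URL path are placeholders of my own, not real captured values or a real Bungie route:

```python
from urllib.request import Request

# The three cookie names are the real ones; the values are placeholders
# that an app would read from the login web view's cookie database.
cookies = {
    "bungled": "value-captured-after-login",
    "bungleatk": "value-captured-after-login",
    "bungledid": "value-captured-after-login",
}

# Every subsequent API request just carries them in a Cookie header over
# HTTPS; the username and password are no longer involved at all.
# (The URL below is illustrative, not an actual endpoint.)
request = Request("https://www.bungie.net/Platform/")
request.add_header("Cookie", "; ".join(f"{k}={v}" for k, v in cookies.items()))
```

Nothing here is specific to iOS; the point is simply that once the tokens exist, authentication is a matter of attaching three opaque strings to each request.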

This is rubbish, I know PSN and others offer a secure OAuth login.
OAuth login is only secure if used from start to finish. For Destiny API access this breaks down at the point where the cookies are needed. To capture these cookies some form of built-in browser/cookie database is needed. Sorry to repeat myself, but it seems many people see the word 'oauth' in a web browser URL and think that automagically makes everything secure. Bungie could, and hopefully will, add something like OAuth in the future. But today they don't. Instead the only way to access the API is with 3 cookies that cannot be grabbed without using your PSN/Xbox credentials.

What about using mobile Safari?
In the past you may have used other apps that would bounce you over to mobile Safari and, after logging in, bounce you back to the app. This is a secure system based on trusting Apple and mobile Safari. It cannot work for the Destiny API. 3rd party apps are not allowed access to mobile Safari's cookie database and therefore cannot grab the 3 cookies. Even if they could, Apple now reject all apps that bounce via mobile Safari under the claim they offer a poor user experience.

What about the new Safari View Controller?
In iOS 9 Apple offer apps a way to embed the secure Safari browser. However, precisely because it is secure, the app cannot access its cookie database. It cannot get the cookies, so this is not an option. You may have seen that Instagram and others can use this. That is because they have a true OAuth solution. The Destiny API does not.

I have read all this and even though I understand it isn't secure I would like an inbuilt web browser to give me a false sense of security.
An inbuilt browser UI may come later, but the reason would be useful functionality such as being able to accept the Bungie.net user agreement. Currently Ishtar won't work if you have never logged in and accepted it.

Does Ishtar Commander store my PSN/Xbox credentials?
No. The instant you log in to Bungie and Ishtar gets the 3 cookies, your username and password are discarded by the app. These cookies are then used to access the Destiny API. They cannot be used to access anything else. After about 19 days the bungleatk cookie expires, at which point you are asked for your username/password again to get a fresh set of cookies.

Does Ishtar Commander use my PSN/Xbox credentials in a secure way to get the cookies?
Yes. All communication uses HTTPS and nothing is sent as plain text.

How do I know this is true?
It is simply a question of trust. If you trust the author then use the app. If you don't, then don't use the app. But the same goes for all the apps that use the Destiny API. They can all capture your details if the author is evil, as explained above.

What about if you support 1Password or some other password manager plugins?
These plugins just paste your credentials into the app. There is no extra security here. As the paste happens the credentials can be captured.

Could you open source the network code?
I could, but I am not going to, as there is no way for anyone to validate that the same code is in the app on the App Store. It would just be security theatre: the look of security without giving anything real.

So this all comes down to implicit trust?
Yes. If you are worried then don't use the app. If you do understand the issues and can see that this is the only way for an item manager to work, then please do use and enjoy the app. Hopefully Bungie will offer something better in the future, but right now they don't.

Twitter in 2015

Although Twitter is a net positive for me, the past year has often left me struggling to enjoy its use. As 2014 came to a close I started to unfollow a lot of people. To do this I had to come to terms with the fact I am a Twitter completionist. Just as with getting all the stars on Angry Birds or even eating the last strange dark sweets my kids don't want, I just have to read every tweet in my stream. A stream that is several hundred messages every day. A little voice tells me if I just casually glance over messages I will miss some vital tweet that leads to fame, glory or a funny video.

The criteria I used to unfollow were roughly:

  • Does this person tweet rarely? If so they can stay.
  • Does this person tweet a lot? If so is the majority personal or technical? If it was mostly personal then unfollow.
  • I follow a number of people in other countries and they will tweet about various cases of injustice. These stories really bother me, and if I could do something about them I would. However I have zero influence over things outside of the country I live in and maybe my original home country. So I have decided to reduce who I follow among those that tweet non-stop about issues I have no influence over. I am trying to make more of an effort to take part in local affairs though. And not slacktivism in the form of tweeting. For example at the end of 2014 I joined a local demonstration to push back on a proposal to end the special art and music classes for kids here in Oulu.
  • Does this person tweet pictures of dead or dying children? This is one of the major bummers of 2014. Now that most Twitter clients show a large thumbnail of any linked image you cannot avoid these. Yes, war is terrible. But it is possible to care about social justice and many other issues without pushing horrifying photos to people that follow you. At the end of the day people can tweet what they want, just as I can choose who to follow.

Now there have been several exceptions. But it has reduced my stream by a subjective 80%. Even on days where I only check Twitter once in the morning and a couple of times in the evening it is quick and easy to read any new messages. Much less time goes on reading Twitter and I don't feel like the ills of the world are pressing down on my shoulders. I might have a compulsion to read everything, but the fact that people I don't follow might be saying brilliant things turns out not to bother me at all.

Always do your homework

Occasionally I get asked for some advice about how to be better at design. My advice is simple: do your homework.

Know the history of the problem area

People imagine a lot of design involves a deep understanding of psychology and a mystical, expensive process to find great solutions. But even if this were the case you don't want to waste time re-inventing the wheel. At the start you want to know the best existing solutions to the problem you are solving. You should have a look at existing solutions, especially those of your competitors. Let's say you are working on a payment design and want to have the best possible credit card experience.

Amazon.com credit card entry UI


Apple.com credit card entry UI


Above is the credit card entry page for Amazon.com and Apple.com. Ignoring that the Apple site asks for the security code, can you see another key difference?

The Amazon interface requires the user to first select what type of card is being used: Visa, MasterCard or Amex. If you get it wrong the process fails with an invalid card error. The Apple version takes advantage of the fact that each card issuer has its own unique pattern of numbers. It is not necessary to ask the user a question they might get wrong, as you can detect the card type from the number they enter. Design genius is not needed here, just doing your homework.
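The detection trick Apple relies on can be sketched in a few lines. Card issuers have well-known leading digits (Visa numbers start with 4, MasterCard with 51-55, Amex with 34 or 37); this toy function covers only those three issuers and is nowhere near a complete prefix table:

```python
def card_type(number: str) -> str:
    """Guess the card issuer from the leading digits of the number.

    Covers only Visa, MasterCard and Amex as an illustration;
    a production system would use a complete issuer prefix table.
    """
    digits = number.replace(" ", "")
    if digits.startswith("4"):
        return "Visa"
    if digits[:2] in {"51", "52", "53", "54", "55"}:
        return "MasterCard"
    if digits[:2] in {"34", "37"}:
        return "Amex"
    return "Unknown"

print(card_type("4111 1111 1111 1111"))  # Visa
```

With that in place the "what card is this?" question never needs to be asked, which is exactly the difference between the two forms above.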

People don't do their homework for a variety of reasons. Laziness and an expectation that being smart will get you through are common. But design projects have a limited amount of time. Don't do your homework and you end up going down the same wrong path everyone else already travelled. If you do your homework you can use the time to work on totally new issues that can make a product better than the competition.

Homework isn't just for the start of a project

An easy mistake to make is thinking that homework is just for the start of a project. In fact the start can often be a terrible time, as this is when you understand what you are designing the least. As you get a better idea of what the really tough problems are, go and look again at how other products tackle the same issue. I often find many things I missed the first time and also gain a greater appreciation of why certain products work the way they do.

Make sure doing your homework is part of the process

An easy way to ensure you do your homework is to make it part of the design process. Make sure you are constantly up to date on how your competitors' products work as well as any other relevant design solutions. As most people don't do their homework, it gives you a surprisingly large advantage from a small amount of effort.

Don't believe me? Here is a screenshot from today's iOS 8.1. Look carefully. Can you see where someone at Apple is not doing their homework?

iOS 8.1 App Store credit card UI





iPad Mini 3: Not Our Best iPad Ever

"iPad Mini Retina has scored an unbelievable 100% customer sat [satisfaction]" - Tim Cook 16th October 2014


I'm a fan of Apple and a fan of the iPad. Apple are at their best when their products feel like they are trying to be the best they could possibly be. But that does not seem to ring true about the new iPad Mini 3. Before it was introduced Tim Cook provided some background:

It’s always been a unique blend of simplicity and capability. But while the iPad has been beautifully simple on the outside since the very first one, it has advanced technology just jam packed on the inside. From Apple's custom designed powerful chips, to the ultra fast wifi and cellular connectivity, to the incredible iSight and FaceTime cameras.... But what’s more important to us is that iPad has consistently been rated number one in customer satisfaction. This is what makes our hearts sing. And iPad Mini Retina has scored an unbelievable 100% customer sat [satisfaction]. You just don’t see these numbers in customer sat. And so why are so many iPad users so satisfied? We think it comes back to this unique blend of simplicity and capability.
— Apple October 2014 event

While the new Air 2 saw a new custom designed A8X chip, the Mini 3 has last year's A7. While the Air 2 got new blazing fast 802.11ac wifi, the Mini 3 is stuck with the same speed it had last year. Finally, while the Air 2 got upgraded with incredible new iSight and FaceTime HD cameras, the Mini 3 was stuck with the same ones as last year. In fact, apart from optionally being available in gold and supporting the new Touch ID fingerprint sensor, the Mini 3 is identical to last year's iPad Mini 2.

It's hard to follow the logic of being pleased a product got 100% satisfaction, only to then introduce an unsatisfying update. It's hard to follow the logic of success through increased capability, with a product that adds none. Is the Mini now an unloved sibling to the Air? Is this a sign, just like the paltry 16GB of storage in the entry level product, that it has been priced too aggressively to warrant the expected upgrade? Is this a sign of product managers not thinking about the range as a whole, and differentiating the product from the Air by crippling the features it should have had? Who knows, but one thing is for sure: this is not Apple's best iPad ever.

 

 

Thinking About Adaptive UIs Part 3

After the basics things start to get more interesting from a design perspective.

orientation.png

Portrait and landscape orientation

One way of looking at portrait and landscape orientation is that they are nothing more than a change in aspect ratio. But especially with a phone the design question is not primarily one of a narrower or wider screen, but of how the device itself is held.

One and two handed use

Phones are primarily held one-handed. It is not unusual to find yourself with only one hand to operate your phone, be it carrying the shopping or holding onto a rail on the train. The entire interface of a phone, including being able to type text, needs to be usable one-handed. The only position where you can safely hold the phone in a firm grip and still reach the majority of the screen is when the device is held in portrait.

When held in landscape the phone cannot be firmly held and operated. Landscape on a phone is for two-handed operation. On tablets one-handed operation is not a design consideration due to the size and weight of the device. However hand position is an important consideration and means key buttons in tablet apps need to be close to the edges of the screen where thumbs can comfortably press them.

Thinking About Adaptive UIs Part 2

There are 3 basic elements that first need to be understood regarding adaptive UIs. These are physical size, aspect ratio and resolution.

Physical size

Physical size is measured diagonally from one corner of the display to another. This has been trending upwards for phones.

Aspect ratio

The aspect ratio is the ratio of the lengths of the two sides of the rectangular screen. If the screen were square they would be equal: 1:1. 16:9 has become common on phones, where it offers the best compromise between keeping a phone narrow enough to grip in your hand and maximising the amount of space the screen can take up.
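As a small worked example, the named ratio can be recovered from any pixel resolution by dividing both sides by their greatest common divisor:

```python
from math import gcd

def aspect_ratio(width: int, height: int) -> tuple:
    """Reduce a pixel resolution to its simplest whole-number ratio."""
    d = gcd(width, height)
    return (width // d, height // d)

print(aspect_ratio(1920, 1080))  # (16, 9)
print(aspect_ratio(1024, 768))   # (4, 3)
```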


Resolution

This is the number of pixels a screen has. It is normally thought of in terms of dots per inch (DPI). This has also been trending upwards, and in the next two years 4K (3840 x 2160) screens with DPIs in excess of 800 are expected. The trend is expected to abruptly stop at this point as the human eye can no longer resolve anything higher.
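Pixel density ties the two previous elements together: divide the diagonal pixel count by the diagonal size in inches. For instance, a 4K panel squeezed into a phone-sized 5.5-inch display (my own example size, not a quoted device) lands just over the 800 figure mentioned above:

```python
from math import hypot

def pixels_per_inch(width_px: int, height_px: int, diagonal_inches: float) -> float:
    """Approximate screen pixel density from resolution and diagonal size."""
    return hypot(width_px, height_px) / diagonal_inches

print(round(pixels_per_inch(3840, 2160, 5.5)))  # ~801
```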

The next post will show the new adaptable UI issues that emerge as these three elements are combined with how these products are held and used.

Thinking About Adaptive UIs Part 1

Since 2003 I have been working on the problem of adaptive user interfaces. The common examples are apps that work on phones and tablets that can have a range of different screen sizes, shapes and resolutions. Marco Arment in episode 85 of the Accidental Tech Podcast spoke a bit about what he learned from his previous app, Instapaper. As well as having a lot of custom elements, this had a separate UI for iPhone and iPad. This was then compared to his new app Overcast and the additional challenges offered by the different screen sizes of the new iPhone 6 and 6 Plus. A variety of reasons were given for minimising the amount of UI that is custom to a specific device and instead just having a single universal adaptive UI that works on all iOS devices.

On first listen I broadly agreed with some of the reasons given and kind of agreed with the conclusion, even though I also hope it is wrong. I want apps that get the best out of my iPhone and my iPad, not just a one size fits all kludge due to the cost, or questionable user value, of having anything device specific. It kept bugging me, and I remembered what a surprisingly complex area this is. A lot of people are in the dark about all the issues here.

Back in 2003 the design brief was to take a touch based mobile app platform (yes they did exist back then) and allow it to adapt to a range of screen sizes. “Apps can scale on the desktop when you resize the window, now just do the same for mobile”. But mobile apps were far more complex. Factors such as holding and using a phone one or two handed needed to be considered. Then there were different types of app. Some could easily scale to use a different screen such as many games, video and the web browser. While others such as the calendar needed major redesigns.

Since 2003 we have seen the emergence of responsive websites: a single site that can adapt to desktop and mobile web browsers. As Apple have increased the range of screen sizes and resolutions with the iPhone and iPad they have also introduced a single new solution to adaptive needs called Auto Layout. Today the naive call is to just make your app scale like a responsive website, or just use Auto Layout. But it is not that simple. There is no one size fits all. Or at least there isn't if we don't want to wave goodbye to a whole class of interesting applications. At this stage in the evolution of apps shouldn't we be seeing more interesting variety, not less?

The following series of posts is an attempt to share some of this earlier learning. To describe why mobile is different to the desktop and the web. To explore how different types of apps have different problems. I then want to research some of the key solutions to making adaptive apps and share them. Right now I am unconvinced that there is a magic technology that can solve all these issues; new thinking and new solutions are needed. Device manufacturers also have a responsibility to limit the range of devices they offer. To better consider these problems and not burden app creators with a wide range of screens as if these issues are unimportant or have been satisfactorily solved.

Is there such a thing as Dangerous Knowledge?

It is not unusual to be told by user experience designers that not only do they not need to learn to code, but they would be less effective if they did. Often with the claim that the process of learning to code makes you overly concerned with the workings of computers and not the people that use the products. You gain an engineering mindset, whatever that means. This post is not about addressing the merits of this specific claim. Instead I want to touch on an underlying assumption. The idea of dangerous knowledge. That there are things we can learn that will in some way corrupt us and make us worse at what we do.

Dangerous knowledge is an ancient idea. It is often used to gain or stay in power. I am reminded of the manager who once told me in all sincerity ‘if I told you and the team everything I know, then what would be the point of my role?’.  But it seems even worse when we decide for ourselves that learning can be bad. That smart people can take anti-intellectual positions. That we don’t even blink when many people who are part of delivering software products say proudly ‘I know nothing about how software works and that makes me better at my job’.

So let's not be so foolish. Maybe you are too busy to learn something new. Maybe deep down you are scared to learn. But these are problems that can be overcome. Ultimately, taking a bite from the tree of knowledge is not how we fall, but how we better ourselves and the products we work on.


The Joy Of Science Podcasts

The news is constantly full of depressing and confusing stories. "Humanity is doomed due to climate change", "Diseases like bird flu and ebola are going to kill us all", "Antibiotics will soon stop working". Despair and be frightened! But I have found a wonderful antidote to this, one that never ceases to brighten up my week: the wonder that is science podcasts.

What I appreciate is:

  • Serious topics presented for non experts. If a topic needs a lot of explaining, then that is just what you get.
  • A lot of fear and nonsense in the media is put in sensible context or simply dispelled.
  • The range of fascinating areas I get to learn about that I often never even knew existed.
  • Interviews with the actual scientists. You often get to hear much clearer limits of what their research does or does not show.
  • A lot of news about research that may well be years from impacting our lives, but clear progress is being made.
  • Fascinating new books to read.
  • An understanding that there are a lot of people out there working hard to make the world a better place and succeeding.

Here are three that I cannot imagine my week without:

Science for the People: This is a weekly radio show from Alberta, Canada. The format is two guests interviewed one after the other. They often have an interesting book they just published, or are an expert in an area that is currently hot news. If like me you are a coffee addict, the episode on caffeine might take your fancy.

The Naked Scientists: As well as a weekly news program on the BBC there are also a number of specialised programs on areas such as neuroscience and genetics. They cover a lot of the latest science news, interviews with scientists directly working in each area and plenty of background information to help you understand the issues at hand. It is often a great podcast to listen to with your kids. Discover why spaghetti always breaks into three pieces, or whether reading as a child will make you smarter as an adult. They cover it all.

Science Weekly: From The Guardian newspaper this weekly podcast covers the latest news.

Were Apple holding back on what makes the Watch special?

Regarding the Watch is the idea that Apple held back a key aspect of the product. Not a standard feature, but something major. The kind of thing that would elevate it from mere accessory, to something that one day could replace an iPhone.

But in today's busy world you only get to launch a product once. The fear of bombing is enough to stop anything key being held back. In fact the problem is often just stopping this fear from resulting in an unfocussed presentation which does not zero in on the two or three elements that make the product understandable and desirable.

Do Apple even have a history of holding back major aspects of a product? Sure, they don't detail every tiny feature. Yes, with iOS betas they do not reveal aspects that give away details about the soon-to-come iPhones. But major features that are only announced at shipping?

What about keeping details back so competitors cannot copy them before the product launches? Again this rings hollow. Apple have a whole lot of other issues medium term if the Watch can be cloned in mere months. Compare that to the iPhone and how it lived up to being five years ahead of the competition.

Looking back at the video of the event you can see an Apple that is going out all guns blazing. The event itself was back at the Flint Center: "... on this stage we introduced the iMac. Which signaled the rebirth of Apple. Today, we have some amazing products to share with you. And we think, at the end of the day, that you will agree that this too is a very key day for Apple."

Then Tim Cook announces (55:45) "We have one more thing". It made me wince to hear this classic Steve Jobs line. Was it too personal to Jobs for it to be used again? Not on this day.

Compared to the focused beauty of the iPhone launch, what followed was a laundry list of features making it hard to think back to anything particularly memorable. Overall, far from being a day where anything was held back, it was a day that sorely would have benefited from it.

On struggling to understand the Watch

It has been three weeks since the announcement of the Watch and, if anything, my thoughts have grown more confused. The two perspectives that interest me are whether it is just going to be some semi-interesting accessory, or whether it is going to be a significantly important computing platform. Put another way, is the Watch going to be in the same category as, say, the Apple Cinema Display, or will it, for some of us, be replacing our iPhones?

Back in 2000 the design department I worked in purchased a Creative Nomad Jukebox. It was an early MP3 player with a reasonably sized hard disk in it. It was horrific to use, even for a nerd. But having instant access to thousands of songs was clearly amazing. When the iPod launched in 2001 it was obvious how useful this was. Apple had taken the wonder of having your whole music collection with you and not just made it a joy to use, but packaged it up into a fashionable and desirable device.

My entire career has been spent working on touchscreen based smartphones. They were often difficult to use, even for nerds. But the potential was there. We were all staring into our phones and ignoring friends and family long before it became the norm. On touching an iPhone in 2007 it was again obvious how great this was. Back then there was no App Store, but it didn't matter because the web browser alone was AMAZING. You could blog for weeks just on aspects of the iPhone that changed the game and unlocked all the potential compared to any product before it.

My own pattern for trying to understand new products has followed this. There is an area with great potential, but the current products suck. Then you get a new product and you can see the potential in that area has been unlocked. But with the Watch I just don't see that. I have tried various smart watches and never found anything interesting. They have been horrific in design and use, but without the hint of being an important device. Sure some of the fitness applications are interesting, but this is a micro niche.

Part of me is impressed that Apple have entered an area this early. No one has made it obvious what the potential to be unlocked is. But when I hear talk about this being a big platform, that it could cannibalise iPhone sales, I feel lost. I don't know what I am missing here, because all I see is something that can complement a product such as the iPhone. What am I missing?

A blog post a day (setting myself up for failure?)

I was reading Anil Dash's article about his experience from blogging for 15 years. It got me thinking about how attempts to blog here regularly have been a dismal failure. So I am going to try an experiment where once a day, for a week, I post something. It might be super simple, but it should be longer than a tweet. Comments are open so maybe they will help provoke some further posts. For now at least I could start with my own confusion on trying to understand the Watch. So let's get this party started...

What would be a dream way of creating apps?

Apps are marvelous. But the way apps are created is still stuck in the dark ages. Apple recently announced Swift, but at the end of the day is this anything more than a catch-up with the rest of the industry? The three main hurdles of learning Xcode, understanding how to program and learning the Cocoa Touch framework are still there. As is the lack of any change to how designers and developers can better work together. What would be a genuine game changer?


No environment to set up (Just works).

With Android and even iOS, getting to the point where you can write a line of code, let alone see something run on a device, is torture. Setting up a developer environment is countless gigabytes of downloads. Even with Google and Stack Exchange to consult, getting a working setup with all the right SDKs, plugins, certificates and other junk is a chore.

It shouldn't have to be this way. Everyone should not suffer just to appease developers that want to be able to tinker with everything. Nor should it be acceptable to leave these issues unfixed just because developers are willing to spend a lot of time and effort getting a working setup. 

The dream would be a system as easy as downloading and using a new Twitter client. It should be possible to download a tool and have some code running on an actual device in less than five minutes. This would both make starting to develop apps accessible to ordinary people and also free millions of developers from having to go through the same horrible setup process.


Bring designers and developers together (Tools for app creators).

For historical reasons mobile apps are created by separate designers and developers. There is a lot of nonsense claiming these roles are fundamentally different and even require different tools. This in turn has had the effect of cementing the absurd divide in place. The question should not be how we can make designers or developers happy. The question should be what we can do to make app creation as easy as possible. Give everyone working on apps a shared toolchain dedicated to the problems of app creation.


Don't dumb it down (No tinker toys)

The best solution to creating dynamic logic in an app is a modern programming language. Provide just that and a great editor to complement it. On the other hand, the best way to create visual layouts and animations is visual tools. Don't listen to developers so stuck in their ways they want to do everything in code, and don't listen to designers so terrified of coding that this side is hidden away. Give smart people the best possible tools. Also, the code and visual sides should play together seamlessly. Don't hide the code away or have visual tools that generate uneditable code. Instead of sticking with the fantasy that apps can ever be created without code, fix the main problems with writing code and make it more accessible to all.


Get designers hands on.

It is 2014 and it is crazy that for most mobile apps every UI element is positioned by a developer. If a designer wants to adjust how pixel perfect a button placement is, or tweak a font, they have to ask a developer to do it. In the time it takes to reach an understanding countless other tweaks could have been made. The problem is that even with tools such as Interface Builder it is far too hard for designers to get hands on. Editing the code often seems impractical. But if designers can hand edit a web site, there is no reason we couldn't have a system where they could get hands on with the real app design.


A language for thinking about user experiences.

Back in the '70s the first object-oriented languages were created to enable the Graphical User Interface. Code objects had the potential to map to real objects in the interface. Over 40 years on this way of developing user interfaces still dominates, and it sucks. When designers run away screaming from the idea of coding this is one of the main culprits. These languages don't allow you to think in terms of what you want. You don't directly express concepts such as text, images, buttons, animations and visual effects. Instead you describe how an abstract engine running on a computer should serve these up. You spend ages translating the 'what' into a 'how' and it gets in the way of coming up with a great design.

The dream would be a language that, along with the ability to handle complex logic, would also allow the direct expression of the user experience in terms of what is needed, not how the machine should serve it up. It should provide the basic elements needed to allow people to easily break down even the most complex design into manageable and understandable pieces. Just as with the developer environment, all the scary stuff should be abstracted away and all the common parts of a user experience should just work without configuration.
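To make this concrete: no such language exists, and every keyword and name below is invented purely for illustration. But a hypothetical sketch of expressing the 'what' directly might look something like:

```
// Hypothetical, invented syntax: declare what the experience is,
// not how an abstract engine should serve it up.
screen PhotoDetail {
    image photo           // fills the screen; pinch to zoom just works
    text caption          // wraps and scrolls without configuration

    button "Share" {
        animate photo with fade(duration: 0.3)
        show SharePanel
    }
}
```

The point is not this particular syntax, but that text, images, buttons and animations are first-class concepts, rather than things you assemble out of view hierarchies and delegate callbacks.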


So productive it renders separate prototypes pointless.

There is an idea that fundamentally you cannot try out ideas (prototype) with the same language and tools as used to ship the production version of an app. Instead separate tools and scripting languages are used to create prototypes of varying fidelity. But there is no law of computer science that states it has to be this way. Maybe as engineers are all masochists there has been little pressure to address these problems. But think of the benefits if a production-ready software environment was as approachable and productive as the ones for making prototypes.

Currently so much work goes into throwaway prototypes, and then duplicate work goes into recreating them again with the actual shipping technology. The prototypes often cheat and hide designs that cannot be implemented for real. Designers can be found wasting valuable time fixing issues that are part of the prototyping tool, not the real app. There is often no access to real device functionality in these tools. No real HTTP and network support. No using cool features such as the camera, accelerometer or location. Often even text input has to be faked. Then there is a lack of access to real data. You can make an amazing video of zippy animations using After Effects. But then the real app has to contend with network connections and the types of real content people own, not the glossy stock photos demos often use. If you could design with a real device and real data these problems would no longer stay hidden till late in the design process.


See a design run on a mobile device as fast as you can think it (Live development)

There is this magical yet horrifying point during an app's development. After weeks of design effort a developer creates the first real version of the app. This is then deployed to a phone or tablet, you put finger to glass, and... your app sucks. It should have been obvious, but somehow when the design lived on paper or in a contrived fake prototype obvious problems were hidden. Then you ask for a change and the cycle starts again. Even the simplest change to an iOS app takes about 30 seconds to build and deploy back to a phone. This stop-start process makes it expensive to iterate and can try the patience of even the most hardened engineer.

The dream would be seeing your ideas appear instantly on a real mobile device. The speed of thought is perhaps unobtainable, but how about at the speed you can type?


Capable of delivering the absolute best of the best apps (Performance)

The App Store and even the Play Store are brutal places to compete. Just good enough does not cut it. What are the best apps? They are the ones that don't drain the battery, that don't crash due to hogging memory. They open instantly and have the silkiest smoothest most responsive user interface possible. Sixty frames per second with no dropped frames or stutters isn't an aspiration, it is the bare minimum. Not only is performance important but so is the ability to get the best out of the current generation of smart devices which often have amazing graphical power that has gone untapped.


An environment like this would make teams far more productive. It would change the dynamics of how designers and developers work together. It would also be a really great way for people to learn how to write software and turn their own app ideas into reality. Now this would be a game changer. But right now it doesn't seem to be happening.

Why is it taboo for apps on a phone to rotate upside down?

apple-orginal-iphone.jpg

I recently saw a news story that claimed apps running upside down on a phone is taboo. It left me wondering how many people think these are just capricious and seemingly irrational choices rather than design decisions. Decisions often made after seeing the problems upside-down apps can cause on a phone.

Think about your phone. At the top is a speaker so you can hear who you are speaking to and at the bottom is a microphone to pick up your own voice. If the phone is upside down your ear won't be near the speaker and the microphone will be too far away to properly pick up your voice. Today phones are mainly just a large screen. When you get a call, a quick glance won't be enough to know if the phone is the right way up. If the phone app is upside down, you can now answer the call and it might take some time to realise why the other person sounds so faint and why they cannot hear you.

You could suggest that in this case the phone app always launches the right way up. Here you get a new problem. Let's say you are writing a text message, but with the phone upside down. You get a call and now the phone view is, to you, the wrong way up. You have to rotate the whole phone to properly view it. It isn't a huge deal, but it is not pleasant. You try to avoid forcing the user to rotate the phone as they use it. This is why if an app does run in landscape, all of its views will also work in landscape. You don't want the user to press a button in an app and then be forced to keep switching the phone between landscape and portrait. Both these issues go away if you disallow apps from running upside down.
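This is in fact the default on iPhone. As a minimal sketch of how an individual app expresses the same decision in today's UIKit (written in modern Swift, which postdates this post; the class name is hypothetical):

```swift
import UIKit

// Minimal sketch: a view controller that allows portrait and both
// landscape orientations, but opts out of upside down.
// "PhoneFriendlyViewController" is a hypothetical name.
class PhoneFriendlyViewController: UIViewController {
    override var supportedInterfaceOrientations: UIInterfaceOrientationMask {
        return .allButUpsideDown
    }
}
```

The mask is a design decision made once per screen, which is why a whole app can opt out of upside down while still supporting landscape.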

hustle-31-600x331.jpg

Another related issue is why some apps on a phone only run in portrait. At least for the phone app itself there is a clear reason. You normally start or answer a call with the phone held in portrait. But once it is held to your ear, as seen above, the phone is closer to the landscape position. If you then need to check some info and find your app has rotated to landscape, it is disorientating. Most people don't realise the phone has been held at an angle during the call. It can be stressful enough talking to someone while trying to check some data. So this kind of confusion is something designers opt to cut out by having the app run only in portrait.

So there you have it: why it is advised not to allow your app to run upside down, and also why it's not always preferable for an app to be usable in landscape. There are good exceptions. It is also why on a tablet, without phone hardware, these problems go away and apps do indeed run in any orientation.

Designers. Get over your fear of code and on with your lives (part 2)

Part of any craft is mastering the materials you work with. A chef does not just write out a menu, but is a manipulator of food. A jeweller is not someone who just sketches rings and necklaces, but a metalsmith. Just as the furniture designer is also a professional carpenter. They know how to work with materials. Work that includes understanding and overcoming the material's inherent properties. The material a user experience is crafted from is software! This is why designers need to get their hands dirty with it. Yes, it may be more abstract. Software indeed has differences from physical materials, but it also shares some characteristics.

Somehow the dangerous idea that designers don't need to get hands on with software still dominates the field. My own view is that designers who can get more involved with the implementation of the product are not just better designers but also happier in their daily work. We claim to be creatives, yet what do we create? For many it will just be PowerPoint decks, wireframe views and UI specifications. Are these really the products of creativity? Can you proudly point at the fingerprints you left on a product?

Endlessly debating whether designers should code can easily get boring. Once you are working hands on it is pretty clear that a lot of the tools and technology to enable better user experiences are simply terrible. One reason is a lack of designers getting in there, trying to use them and making smart suggestions on how to improve them. What follows are some of the main arguments I have encountered while trying to get designers more hands on.

"You need a degree in computer science to understand how to develop software."

While the more you know isn't going to hurt you, a full degree is not needed. I don't have a computer science degree. Hell, some amazing developers I know don't even have one. You just need time, supportive and patient colleagues, and decent tools.

"A little knowledge is a dangerous thing."

At the beginning, as designers try to get to grips with the fun world of software development and the product itself, they are going to say many dumb things. Take version control. Often designers new to it will worry that they now have the power to destroy the entire project by adding some broken code! This is not the point to laugh and call someone dumb. Just as you don't laugh at children when they start to speak. Okay, maybe you do laugh, but only to encourage them. We should see the mistakes and incorrect assumptions as part of the path to proper understanding. Trust me, if you are a developer and you think designers are saying silly things, you should be a fly on the wall when only the designers talk to each other. Don't be daft and suggest designers should stay away from engineering issues; go explain how things really work.

"Knowing about development issues makes you less creative."

I often feel embarrassed at how often I hear this claim. The idea here is that you need to be ignorant of the daily realities of software development to be able to push the limits. First of all, when it comes to pushing the limits of what software can do no one is more active than software developers. Take the world of open source software, where designers are massively underrepresented. Here a constant stream of projects exists due to various developers' frustrations with the status quo.

There is a germ of truth here. Engineers are still mostly human. Estimating how hard a problem is to solve simply by thinking about it almost always proves wrong. What is thought to be easy can take weeks, while the impossible turns out to be not just possible, but sometimes simple. The solution here is not ignorance, but high standards combined with finding the issues worth pushing.

"You cannot stand up for the end user if you also have to worry about development issues."

It is possible to walk and chew gum. For sure, anyone can get lost in the details. It runs both ways though. Just as developers can get lost in the details of implementation, designers continually get lost worrying about issues that are not that grand or important to people when the product ships. Loss of perspective is best fixed by knowing about a wider range of issues. Designers don't have to stop standing up for the end user just because they know about the implementation.

The reality is that the separation between designers and developers is what more often leads to products that don't work well for the user. A lack of hands-on involvement leads to poorer communication of what the genuinely important design issues to be solved really are. Focus goes to the wrong parts of the product, and by the time the mistake is realised the chance to make significant changes to the software has passed.

There is also a flip side to this. Even the purest of designers, whatever that means, will have some knowledge of what is possible with software. No one wants to look like a crackpot by suggesting nothing but impossible ideas. Experience shows a poor understanding of what is possible just as easily leads to possible ideas never being suggested due to the designer thinking they cannot be done.


We live at a time when it is still fashionable to claim there is dangerous knowledge. Knowledge that can make us less creative. Knowledge that would harm us to know. Be it the well-meaning manager protecting employees from information that might distract them from their daily work, or the bureaucrat who claims some knowledge must be kept secret to protect our safety. But we are not little children, and we are best able to decide for ourselves what is useful and what is not. We also need to constantly strive to broaden our education. To open ourselves to new ideas and new skills. It saddens me to no end that so many of my colleagues have crafted a wide range of reasons to push fingers in ears, stick heads in the sand and cut themselves off from a world of knowledge that could transform their daily work. This is meant to be an enlightened age. It should go without saying that taking a bite from the tree of knowledge is not original sin, but how we liberate and improve both ourselves and the world around us.


Designers. Get over your fear of code and on with your lives (part 1)

It is time for an intervention. It is no longer clear if designers are helping or hindering the creation of great products. We have backed ourselves into a ghetto of our own making. It is an anti-intellectual ghetto, where we claim engineering and software development is a dangerous type of knowledge, a type of knowledge that will make us worse at our design jobs. It is also a real physical ghetto, or silo. We have our own separate tools that often don't play well with the ones engineers use. We even sit and talk apart from those who should be our greatest allies in great product creation, the engineers. There is an answer though. We need to learn to code. Yes, I can imagine the howls of pain. I think I can hear the sound of pitchforks being sharpened. The torches have been lit and soon a mob will be coming to burn me at the stake. But hold back the cries of heretic and hear me out.

But first I will throw some salt in the wound. Being able to code won't be enough. There are plenty of designers who in some way claim to know how to code. Be it based on a brief programming module at university, or those who started out as engineers before moving to design full time. It even includes those of you who 'code' prototypes. Yes, none of that is enough. Designers need to get their hands properly dirty. We need to be working on the real software that will ship as the final product. This will indeed mean craziness such as installing a software development kit. Or as we pros call it, an SDK. You will have to use and even love version control and yes, *gasp*, maybe even the command line.

If this is right there should be clear benefits. Designers should find themselves happier, more productive and more creative. We should see designers and developers working better and more respectfully together. We should also see a new generation of tools and technology that allow designers to happily work hands on. And critically we should see many more delightful products.

A lot of the arguments designers make are about how scary some of the tools we have to use to work directly on software are. Again, this is our own fault: instead of giving feedback to improve them, we have just run away claiming it is the engineers' job to suffer their use. My own experience is that when you stereotype engineers and designers, the tools created for them are terrible. It is just blindly accepted that engineers will put up with tools that are hard to set up and use, while designers will not learn to use anything unless it works like Photoshop. Let us return to this later though.

Before we can really dive into this conversation though we need to fully confront the issues around the idea that learning to code can make you a worse designer. More on this soon.

Talking about icons: Between the grid and intuition

At the end of episode #90 (1:05:55) of The Critical Path, Horace Dediu raised the issue that "we don't know how to talk about design, we have a real problem with the language... and even designers have this problem. They can't justify their work because so much of design is ambiguous". It is an issue close to my heart, although conversations specifically about trying to define design quickly lead me to wanting to kill myself. Instead I would rather get back to doing some real work. All I will say here is that I remain unconvinced a new vocabulary is needed, and that everyone, designers included, should just strive to be clearer in what they say. That is also to say that as designers we should always be alert to the ever-present danger of being overly pretentious and disappearing up our own arse.

Last week Neven Mrgan made an interesting critique of the iOS 7 icons, which was met with a dickish follow-up that focused on Neven's claim that some intuition is needed to make icons look right. I don't know Neven, but I am a fan of his work and have followed him on Twitter for a while now. So I am going to restate some of what rang true for me.

With iOS 7, every detail warranted the same rigor toward design. Like refining the typography down to the pixel. Redrawing every icon around a new grid system
— http://www.apple.com/ios/ios7/design/

For iOS 7 there is a new grid system, and Apple were so excited about this they felt it worthy enough to use in their public marketing.

We see grid systems at work every day in newspapers, magazines and their online equivalents. A grid is used to consistently align the columns and rows of text and images. This grid, or variations of it, is then used throughout the paper or magazine and gives it a consistent personality. If you have ever made an invite for a party or tried to create a newsletter and found the end results looked somehow amateurish, it was probably due to, among many other things, a lack of understanding of how to use a simple grid to line everything up. This is just a small taste of what grids help with.

When you start to create software and developers discover grids, they get excited as well. They see how everything should be spaced and aligned. And then something funny happens. The designers start to ask for changes that, to outsiders, seem to randomly break the grid. Just as it seemed those wishy-washy designers had brought some rigour to their work, they want to stop using it everywhere. So what is going on?

Once you have a real design using a grid system you start to see weird things happen. The grid tells you certain items are in perfect alignment and are correctly sized. But your eyes tell you they are misaligned and mis-sized.

A Necker Cube


It starts with the fact that our eyes are not video cameras. Our eyes have inbuilt assumptions about reality, and our brain then does a huge amount of work to reinterpret the results, which we perceive as the real world. This whole process is riddled with mistakes. If the human visual system were considered hardware and software we would say it is rather buggy. One of my favourite aspects of how poorly the eyes are engineered is that the optic nerve at the back of the eye leaves us with a large blind spot. Other aspects of the poor engineering of the eye are the wide range of ways it can be tricked, in the form of optical illusions. An example is the Necker cube. Look at the cube above and stare at the red dot. The cube appears to flip so that the red dot is sometimes inside, and sometimes outside, the cube.

The human visual system is not understood to the point where science has much to say about iOS icon designs, but we do know why it has all these engineering issues. Our visual system was not engineered, it evolved. And although we have good reasons to believe it gives us a pretty good approximation of the real world, evolution does not lead to perfection.

These kinds of issues mean you find yourself having to nudge items in a design by a few pixels so they end up looking aligned, even though they are not actually following the grid perfectly. The same goes for the sizes and shapes we use for icons. A range of shapes may all fit perfectly inside a grid, but factors such as shape and colour may mean one looks smaller or heavier than another. I am sure there is a research programme here and we could spend time trying to develop a range of rules. But I suspect every shape might need its own rules. And then the background colour and icon colour would also have an effect. Pretty soon a very complex formula, if one exists at all, would be needed. It is here we have to rely on the eye of a talented and experienced designer. This does not exempt designers from giving some rationale, but it does call for some patience and forgiveness from those who expect a very precise answer. This is why it can often help to have more than one design solution to a problem, so at least people can compare one icon with another.

Something I deeply feel is an important part of this craft, and that often goes unrecognised, is the importance of standards. A big part of design is taking the knowledge of what has been found to work over what has been known to fail. An example is avoiding 'yes and no' dialogs and instead offering clear actions. So in design we prefer 'Delete Photo? <Delete> <Cancel>' over 'Are you sure you want to destroy all your photos? Maybe you don't? <Yes> <No>'. The same goes for graphic design, and this is where some of the new iOS 7 icons are real head scratchers. It is totally legitimate to change the style of what an iOS icon should be. But some of these changes appear to go against what has been accepted as good graphic design in general.
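The 'clear actions' pattern can be sketched in today's UIKit (modern Swift and UIAlertController postdate this post; the function name here is hypothetical). The key point is that the buttons name the action rather than answer an ambiguous question:

```swift
import UIKit

// Minimal sketch of a 'clear actions' dialog: the destructive button
// says "Delete", not "Yes". "confirmDeletePhoto" is a hypothetical name.
func confirmDeletePhoto(from viewController: UIViewController,
                        onDelete: @escaping () -> Void) {
    let alert = UIAlertController(title: "Delete Photo?",
                                  message: nil,
                                  preferredStyle: .alert)
    alert.addAction(UIAlertAction(title: "Delete",
                                  style: .destructive) { _ in onDelete() })
    alert.addAction(UIAlertAction(title: "Cancel",
                                  style: .cancel,
                                  handler: nil))
    viewController.present(alert, animated: true)
}
```

Because the action is named, the user can decide from the buttons alone, without re-reading the question.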

I stole this from http://mrgan.tumblr.com/post/53308781143/wrong


Session 221 - iOS User Interface Design from WWDC 2012 describes a great iOS icon as being 'beautiful and instantly recognisable', with a clear recognisable shape. Some of the new app icons, such as the App Store icon, have a shape that fills almost the entire space of the icon, reducing how recognisable it is. This has long been considered the mark of an amateur icon, and to now suggest this standard is wrong is bound to provoke some serious discussion. Many people don't find apps based on the name under the icon. I bet we could arrange a test: hack a phone so all the icons are black rectangles, but still have their names underneath, and people would struggle like hell to find their apps. Similarly, a shape with a decent amount of space around it is both more visually appealing and faster to recognise. This is not an ambiguous statement, but an actual testable consequence.

I hope this was a small step towards bringing some clarity to why designers have issues with some of the icons in iOS 7, without resorting to 'it's just my special opinion'.


The end of the portfolio mega stars

Don’t you sometimes long to be CEO of a company like Sony Ericsson, Samsung, Nokia or Microsoft? So that you can say to your coders, your designers, your development teams and your software architects: ‘Not Fucking Good Enough. I haven’t said ‘Wow’ yet. I haven’t gasped with pleasure, amusement or admiration once. Start again. Not Fucking Good Enough.’
— @stephenfry

Late in 2008 Stephen Fry wrote an excellent article about how the rest of the industry's players had better step up their game. For sure, at times teams may indeed need a proverbial kick in the bum. Although back in 2008 I and many colleagues were delighted with this article, it also misses some key problems that no amount of shouting, or better yet high standards, would have fixed.

How the industry looked in 2007

The dominant way to create phones at the time was with what is known as portfolio theory. The main players created tens of products every year to fit every possible niche in the market. These portfolios would then gain you a large amount of market share that no individual product could ever hope to gain. These products were divided up based on various factors. Some on price, ranging from entry-level cheap phones to expensive flagships. Some on what they could do, be it music focused, camera focused, media focused, etc. Others were divided up based on types of user, be it the tribes they belonged to, such as 'young hipsters' and 'busy family parents', or personality types such as 'early adopter' and 'technology laggard'.

Each of these products would then have tens to hundreds of hardware variants. This was to support everything from different types of cellular network to local laws such as the banning of GPS or cameras. These would then in turn be running many hundreds of software variants. The vast majority of these variants were just tweaks to turn off or on specific features. But they also included full interface customisations for various network carriers.

Products of the time had a wide range of personalisation options. The hardware supported changeable covers and a loop to allow phone jewellery to be hung from it. The software could be customised with everything from setting a home screen picture to changing the ringtone. You could even download UI themes so that all the apps and icons on the phone would fit the aesthetic of your favourite sci-fi movie or Hello Kitty.

The end result was that the successful companies of the time were masters of managing the complexity of hundreds of hardware products and software versions. I cannot stress this enough: it was amazing. A management, engineering and production feat that was difficult for even those of us who were part of it to fully appreciate at the time. These companies could even juggle multiple platforms and operating systems. Back when I worked at Symbian it was not uncommon for members of the foundation to announce that as well as products based on their own OS and Symbian, they would also be working on various new phone platforms that came along. Everything was optimised towards this end. It wasn't just a theory reflected in a wide range of products, but how entire companies were organised.

Here comes the iPhone

So come late 2007 I have in my hand a product that just thumbs its nose at everything we held to be good and true. From the perspective of an industry insider the iPhone seemed unashamed in its attempts to appeal to as few people as possible. You could only buy the phone in the USA, and on a single network carrier. It only came in one colour, with just two storage options of 4 or 8 GB. The phone was essentially tethered to a desktop PC. On turning it on for the first time a graphic appeared showing that a connection to a PC or Mac running iTunes was needed to activate it. Even the contract with the carrier had to be a more expensive one with a data plan. At the time it was remarked that maybe the device only had appeal in the USA, something further compounded by English being the only language it then supported.

It just seemed outrageous. Data plans were so expensive, and at the time the majority of people were on voice plans. How could a modern product not ship with at least twenty languages supported from day one? What about people who don't have a desktop PC or Mac but would want to use a product like this? No one sane would ignore the enterprise market. Then, to add seeming insult to injury, it was missing countless features that conventional wisdom demanded of any product: 3G network support, cut-copy-paste, MMS messaging, video recording, a front-facing camera and video calls.

It was just mind bending. If Apple had decided to just give the product to the existing players they would have rejected it for not fitting any portfolio. There was just a single product with essentially one hardware version and one software version. Surely more had to be coming? Even personalisation was brutally limited in the first version to just selecting a different image on the lock screen. The ringtone could be changed, but only to one of a handful of built in tones that Apple supplied. Until a software update late in 2007 you could not even change the one single standard tone for new messages.

But what became clear as time went on was that many of these so-called weaknesses were strengths. Like an early arthropod climbing out of the water to live on land, this product was going to carve out a secure niche before going on to eat everyone else's lunch. I plan to come back to these so-called limitations and the important design lessons that can be taken from them. But I want to finish on a different revelation around the user experience.

Who knows what experience your customers are having?

It slowly dawned on me that this product had to have been created by a small team and, uniquely, that team had created and controlled the experience everyone was getting. Let me explain. With a portfolio, variant and personalisation strategy it is simply impossible for the team responsible for the overall design of a phone to also be involved in every possible version. You have to divide up the work into separate teams. This is further compounded by the reality that most employees will run the default vanilla version of the software. They won't see all the different custom versions, as you can only run one variant at a time on any device. It would be a full-time job just managing multiple phones running multiple versions of the same software. So you have no idea how well some of these variants even work. Thus you get a double whammy: an experience that was not designed by the core team, and a company that has become ignorant of the experience customers are having with its own products.

Although few people probably know every aspect of the iPhone inside out, it is possible to achieve this knowledge, and to know that it applies to every other iPhone user out there. After all, they are all running the same software on the same hardware. From a design perspective it is then crystal clear who is accountable for the user experience, and it is trivial to get the same experience yourself.

But could the iPhone still be an iPhone and support a portfolio? Pondering on this led me to see just how focused the iPhone actually was on being a single product. Something so different to the products of the time and, interestingly, still unique to this day. But more on that will have to wait for a future post.

 

Raising the discussion

It is a funny old business, the technology industry. Despite owing its entire existence to the greatest search for truth and knowledge we have, science, we often seem pretty damn insistent on learning nothing. So before getting to some real lessons from the past six years I wanted to quickly cover some of the uninteresting discussions currently ongoing.

First!!
— Internet comment thread

There is a lot of fuss over who did anything first in this industry. Who invented the smartphone? It appears we should find this party and credit them with everything that came after. It is not clear why they should get all the credit, but for sure all the products that came after should get none, right? One problem is that even if there is a clear inventor of the smartphone, so much of their work was built on the previous generations of mobile phones. So should we find who invented the mobile phone? But then we find so much of that work goes back to the original landline system, so do we then credit Alexander Graham Bell? But what about Samuel Morse, whose telegraph and Morse code came before? We can keep on going till we are crediting whoever came up with smoke signals, or maybe whoever said the first human word?

The thing that hath been, it is that which shall be; and that which is done is that which shall be done: and there is no new thing under the sun.
— King James Version of Ecclesiastes 1:9

The next issue is originality. Take an iPhone and chop it up, and you can see that various parts were already visible in products that shipped years before.

Dedicated graphics chips (GPUs) for phones were being promoted all the way back in 2004, and phones containing them were already on the market by at least 2006.

Phones from 2005 already had accelerometers.

The WebKit browser engine, used by both the desktop and mobile versions of Safari and Chrome, was already in phones by 2005.

Countless devices already had touchscreens, including this Symbian model from 2000.

Capacitive touch technology was not only in use in phones, it had already been used in the iPod click wheel.

Today's phones are crazy complex. Just throwing technology together does not a great product make. Products do not spontaneously evolve from these individual elements, just as life does not spontaneously evolve out of peanut butter. Back in 2007 many talented teams were trying to utilise all these great technologies in products, and we all failed to make something like an iPhone. The lessons of why still go unlearned. So in the next post let's go back to 2007, see what was so different, and see what can be learned for the work we do today.