The metadata that sits in every tweet contains no locale information. If it did, Twitter clients (including Twitter's website) could filter tweets by language and hide me from Chinese, Spanish and other very nice languages … that I nevertheless don't understand.
Update: As George Hahn points out in the comments, Twitter basically supports tweet languages. This means apps like Tweetbot or Osfoora could give options like “don’t show Tweets that are tagged as Chinese or Japanese”. /Update
Putting in the language would of course be an additional hassle for people who tweet in multiple languages, but effectively it would do no more harm than what is done right now to the unprotected eyes of ordinary people who want to try Twitter but might be scared away when all the retweets they get are in English – a language they don't speak. Applications could also try to auto-recognize whether I tweet in German or English (which might not even be that hard) and correct me if I didn't set the language manually.
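Auto-recognizing German vs. English really isn't that hard for a first pass. Here's a minimal sketch using a naive stopword heuristic – the word lists are illustrative, not exhaustive, and a real client would use a proper language-detection library (or the language tag Twitter itself provides):

```python
# Naive sketch: guess whether a tweet is German or English by counting
# hits against small, illustrative stopword lists.

GERMAN_HINTS = {"der", "die", "das", "und", "nicht", "ich", "ist", "mit", "ein"}
ENGLISH_HINTS = {"the", "and", "not", "is", "with", "a", "of", "to", "in"}

def guess_language(tweet: str) -> str:
    words = set(tweet.lower().split())
    german_score = len(words & GERMAN_HINTS)
    english_score = len(words & ENGLISH_HINTS)
    if german_score > english_score:
        return "de"
    if english_score > german_score:
        return "en"
    return "unknown"

print(guess_language("Ich bin nicht der Einzige"))  # → de
print(guess_language("This is not the only one"))   # → en
```

A client could use such a guess only as a default and let the tweeter override it, which keeps the manual hassle low.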
I’ve been to two Windows 8 trainings from Microsoft in the last few weeks, as we at Abelssoft build Windows software for consumers and Windows 8 will be a pretty big market: it will ship with every new PC sold from October on. I’m not sure how much of this I’m allowed to tell, so I’m leaving out everything the speakers explicitly asked us not to share, but basically everything I’m telling has already been leaked before or is stated directly on Microsoft’s MSDN pages at http://dev.windows.com or http://design.windows.com. Also, I’m not from Microsoft, so take everything written here with a grain of salt.
It was presented that there are now about 600M Windows installations, and a quick Google search suggests that half of them are on XP, 6% on Vista and the rest on Win7. I guess the Vista users, some of the XP users (XP support is over) and everyone buying a new PC will end up on Windows 8. All of those people will see the Windows Store, and everyone who tries to use the Metro interface will very early on need a Live ID, which lets them into the Windows Store. Tablet users will only have the Metro interface, as the only desktop apps that work there will be the Office products Microsoft preinstalls.
So the market will be big. Very big. For Metro apps at least. With Windows 8, every Metro app that Microsoft considers a good design example for Metro will be pushed in front of a large number of people’s eyes.
The Metro Design Language is nice, clean and minimalist. I like it. On design.windows.com everything needed is shown, including design decisions, metrics and what you need to design a Metro interface app.
Frameworks and the Market
Most of the other technological stuff that was mentioned will already be familiar to everyone who has looked at Apple’s App Store concepts. For example, there’s the same mechanic of declaring which permissions your app needs to work – whether it needs access to your location, sensors, etc. Microsoft copied from Apple where it made sense, which is basically everywhere.
Interesting differences include:
The Microsoft cut is 30% unless your app is very successful – as soon as it reaches a certain revenue threshold (the number $25,000 appeared somewhere), Microsoft’s cut drops to 20%. This is a very nice move from Microsoft: although it won’t affect most apps, it keeps developers’ hopes of getting rich high. As far as I know, this is counted on a per-app basis.
Trials. There’s a (feature-limited & time-based) trial mode, recurring payments (subscriptions) and in-app purchases – including a mock Windows Store for testing.
Not as strict as Apple. I asked whether we would be able to collect the user’s email address in exchange for an otherwise free app. I got no definitive answer, but the basic message was that Microsoft won’t stop you from using the business models you’ve used so far. They only said that the Windows Store team will check your app, and if they cannot get in or see the functionality based on a trial version, they will likely reject it. From a user’s perspective I don’t like this much, but from a developer’s perspective it’s good news.
Syncing is easier. Everyone who can buy from the store automatically has the credentials to use Microsoft’s Dropbox equivalent, SkyDrive. An SDK is provided to work with it in code. This means syncing will be really easy AND cross-platform (there’s SkyDrive for Mac, iOS, Android and so on), whereas Apple’s iCloud is designed to lock people into the Apple world.
The app sandbox will be one-sided. A Metro app only has access to a few places on disk, but from the “old” Windows side you’ll be able to scan through the (hidden) app folders of Metro apps and theoretically influence them. This has an impact on security considerations, as you cannot openly put private user data on disk, but it also means you can use file-based communication to talk to your “old” desktop apps and services. The implications of this could be manifold. For example, a Metro app could detect whether a full version of its desktop equivalent is already installed and automatically unlock the trial for the user.
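The trial-unlock idea boils down to a simple file-based handshake. Here's a sketch in Python for brevity (the real apps would be .NET/WinRT); the path, file name and JSON shape are all made up for illustration – a real implementation would use the actual hidden package folders:

```python
import json
import tempfile
from pathlib import Path

# Illustrative location only – a real desktop/Metro pair would agree on
# the actual (hidden) Metro package folder, which this sketch does not know.
MARKER = Path(tempfile.gettempdir()) / "demo_desktop_license.json"

def write_marker(license_key: str) -> None:
    """The desktop app drops this file after a successful activation."""
    MARKER.write_text(json.dumps({"licensed": True, "key": license_key}))

def trial_is_unlocked() -> bool:
    """The Metro-side code checks for the desktop app's marker file."""
    if not MARKER.exists():
        return False
    try:
        return bool(json.loads(MARKER.read_text()).get("licensed", False))
    except json.JSONDecodeError:
        return False

write_marker("ABC-123")
print(trial_is_unlocked())  # → True
```

Note the security caveat from above applies: since desktop apps can read and write these folders, anything in such a file is neither private nor tamper-proof.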
No system database will be provided. No Core Data equivalent for Windows 8 for now. You can use file-based databases in your app, but that’s it (SQLite and one other thing I’d never heard of are supported).
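For reference, a file-based SQLite store takes almost no setup. A minimal sketch in Python (a Windows 8 app would of course use the SQLite bindings of its own framework, and a file path instead of `:memory:`):

```python
import sqlite3

# In-memory database for the demo; pass a file path for a persistent store.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE notes (id INTEGER PRIMARY KEY, body TEXT)")
conn.execute("INSERT INTO notes (body) VALUES (?)", ("hello",))
conn.commit()

rows = conn.execute("SELECT body FROM notes").fetchall()
print(rows)  # → [('hello',)]
```

For most apps' settings and small data sets, this is all the "database" they really need.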
.NET 4.5 is something I’m really looking forward to. .NET 4.0 is already a heavenly framework from the future, and .NET 4.5 will be another evolutionary improvement. The async language feature turns your synchronous spaghetti code asynchronous almost automatically. That’s not only big for your own code; it should also make the whole framework feel faster, as Microsoft’s talented engineers provide async counterparts to code that usually blocks.
In the developer and designer trainings it was often emphasized that you more or less automatically get a valid Metro design when you use the new grid-based application template, as it scales and reformats content automatically based on device and portrait/landscape orientation. I got the feeling this was stressed so much because Microsoft doesn’t think people design good UIs. Looking at the Windows world, that’s mostly true.
Semantic zoom is another big paradigm that will be well supported by the grid. The idea is that you can zoom in and out on everything, so you have only a single view in your application that shows different levels of detail, depending on how far you’re zoomed in. You can see this in the Windows 8 Start screen, where you can zoom out for an overview of all your icons, while zoomed in you see interactive tiles and program names.
One other interesting thing is that SVG graphics might be usable, a feature I’ve wished for for a long time. Sharp graphics without big file sizes should be good for everyone.
The Developer and Designer Trainings
Overall, I would have expected more.
First, I was expecting more non-official information. What I gathered was merely what everyone would expect to happen anyway.
Microsoft’s own stance on details isn’t universally consistent. The two trainers I met didn’t have the same knowledge about things, and some information was even contradictory.
The number of trainings and the overall low number of participants often made me think that Microsoft is desperate to train developers to build something other than the ten-year-old Windows Forms apps we all know and hate. In a non-representative poll of the audience, only about a third of the attendees knew about MVVM, which has been the de-facto standard design pattern for modern Windows applications since about 2007–2008. That means most Windows developers are about five years behind in knowledge. Maybe this is why the trainings didn’t dive very deep.
The early, beta-like code release is hard to test. While I like the idea of semantic zoom, the code base is so buggy that it can hardly be tested right now. On a side note, it’s of course still better than not releasing anything beforehand.
The User’s Point of View on Windows 8
Users will see Windows 8 very differently than developers do. In my opinion, you need a desktop and multiple windows for real work, while the minimalist full-screen experience will be the primary way to go on tablet PCs.
Having had Windows 8 for weeks and having played with it a lot, I learned a lot of hidden gestures in the developer trainings that a user won’t discover. When I asked about Microsoft’s plan to make users understand the new interface, I didn’t get much more than “Yes, we’ll have to do something about that”. UPDATE: Windows 8 is out now, and they didn’t.
Therefore I guess that many users will be puzzled, and many will stick with the edition of Windows they currently have. In fact, I even think that Windows 8 will be Metro’s Vista – an unfinished product that lays the groundwork for a really great Windows 9. Sadly, no Microsoft representative was willing to talk to me about Windows 9.
UPDATE: Microsoft screwed it up
1.) Discoverability: While working with Windows 8, I noticed there’s no indication of whether there are options or whether the charms work on the current page of an app. In Microsoft theory, every app should implement a way to share things, the search charm, etc., but if this isn’t implemented, or there is simply nothing to share or search, the charms don’t work. I would have expected them to be greyed out or hidden in the final implementation in that case, but they aren’t.
In terms of discoverability, it would be better to have a visual indication that a search or share charm is available: when you want to share or search something, you don’t want to first check whether that’s possible at the moment. The same problem exists with the lower options bar – you can never know whether there are options without checking on every page of every app. This will lead to users testing swipes and strokes all the time.
This makes Windows 8 a bad tablet OS.
2.) Touch on the desktop: Soon the Surface Pro will be the first real Windows 8 device on which you might want to try to do some work with touch. Microsoft promised to make the old Windows desktop more “touch-friendly”. This is necessary because that’s where you’ll have to do real work – the Metro part of Windows 8, with its one-window-at-a-time approach, won’t work for most people. What happened? They increased the padding on the Ribbon interface. That’s it. Users who actually try to work in Office on a touch screen will often miss their touch targets.
3.) Two worlds: Having the settings you want to change spread across two different system settings apps is deeply frustrating when you can’t find the setting you’re looking for – because it’s in the other one. When you want to drag a file into Mail, you can’t. When you want to look at a Wikipedia page for reference while writing something, now you can’t. (Well, you can, but then you’ll have to try to ignore the Metro “Windows 8-style” part of Windows, which makes Windows 8 a worse operating system than Windows 7.)
2.) and 3.) make Windows 8 a bad desktop OS.
TL;DR: Windows developers will finally have to learn something new, as the market will be big, and I believe the Windows Store will kick off a new gold rush. Windows 8 is technically a very nice concept, but the trainings were a bit shallow, and Microsoft seems to be desperately hoping to find developers who will learn all the new stuff its brilliant engineers have built over the last ten years – stuff nobody used because of Microsoft’s backwards compatibility (the old shit still runs) – and who will actually look at MS’s style guides. Users will like Win8 on tablets but not on PCs, where it won’t be very successful before Windows 9.
In Mac OS X Lion, the default scrolling behavior is that you no longer move the scrollbar; instead, you move the content. This means the scrolling direction is inverted. Many people said they didn’t like this, but I guess they’re not using the touchpad – I grew completely accustomed to it within about three days, because it really does feel more natural, especially with the scrollbars hidden. Touching the webpage and moving it around feels more as if you’re in control.
But Apple won’t stop here. Safari’s new way of moving back in the history – sliding the active page to the right (effectively scrolling left further than possible) so that the previous page in the navigation history appears beneath it – shows where this could lead. If you try this left-and-right scrolling in iCal and do it really slowly, you get an effect like in iBooks, where a new calendar page slowly flips over. This kind of semantic movement is also followed in Mission Control, where you push everything away from you (four-finger swipe up) to get an overview of everything running, and do the opposite to get back close to the windows. I believe this kind of semantic movement of windows and content will sooner or later work in a lot of menus, the Finder, the App Store and anywhere else where “back” would usually be a button.
If this is the course of the OS, I wonder why Apple didn’t go further with it. In the new iCal you can move forward and backward with this new side-scrolling. So if you move the content (for example, the month of August on a sheet of ‘paper’) to the left, ‘September’ slides in from the right. In my opinion, that’s exactly the natural way it should work. Why then, if you use three fingers and make the same gesture (three-finger swipe to the left), does it move the other way round, back to July? Because three-finger swipe left is defined as ‘back’. Putting the ‘back’ command on three-finger swipe right sounds silly, but I think that’s how it should work, since you’re always moving the content to the right when going backwards, and vice versa. It would also send a signal to third-party applications like Twitter, where in a conversation the same confusing three-finger swipe to the left actually moves the content to the right to return to the stream.
What I would additionally like is a three-finger-down gesture for minimizing or closing a window – or, better, some way to define gestures as triggers for actions in programs, as is possible in BetterTouchTool (which lets you remap and define new gestures for touchpads and Magic Mice).
When I switched from Windows to OS X, it bugged me a lot that my favourite browser had a serious problem on OS X: when I used the red X or a swipe gesture I had defined with “Better Touch Tool“, Firefox seemed to close, but it didn’t restore all my open tabs when I restarted it. I found out that the reason lies in the way OS X handles closing a program: it just closes the window but doesn’t quit the program itself, I guess so that it will restart faster if you activate it again. Anyway, to Firefox the red X means “close the window, but don’t quit the program”, so it doesn’t realize it needs to save the tabs for the next start. Mozilla says this is not a Firefox bug; it’s just how the Mac handles programs.
The workaround I use now is the Firefox add-on “Session Manager“. In its preferences, you can set “Close Firefox on close of the last Window”, “On close: save current session” and “On start: choose session: last session”. This lets Firefox always open with the last session, that is, with all the tabs you had open before you closed it. Happy Maccing, Mr Firefox. Also works with 4.0, btw.
So, as I was confronted with “you should learn to use the command line” again, here’s my opinion on this invention. The command line is the prompt where you type in commands and the operating system executes them. It’s what Linux people need to use a lot, what Mac people resort to when nothing else works, and what Windows people never needed to use – the command line was never very powerful on Windows.
In my opinion, this directly impacts platform distribution: Windows 85%+, Linux 1% and the rest Mac. But why should the distribution of a platform be connected to its reliance on the command line? The first barriers seem easy to overcome.
First, you need to be able to type and to grasp some basic concepts: how the file system is laid out, which other parts of the operating system are mapped into the file system, and basic usage principles. This alone is reason enough that most users will never bother. Typing is strictly not what most users want to do – they want to surf the web, listen to music and watch movies, and typing is only needed for messaging or entering profile information on Facebook.
Well, normal users stink anyway, you say, and as a programmer you should be comfortable typing on a keyboard. So let’s look at the positive aspects of the command line: it’s all just typing, you never need to take your hands off the keyboard to use a mouse, so everything is faster – and there are so many commands with so many fine-grained parameters that you can do basically anything in the shell.
This is true, and I don’t want to argue with anyone here. Surprise! I don’t think the command line is inherently bad. Instead, I think it’s an expert system – it pays off once you have invested years and years learning commands, parameters, when to use which parameter and which parameters won’t work together. The learning curve is steep. You need to learn a separate command for even the most trivial of actions: you need to know that the command exists, how to use it and which parameters do what; you’ll ask yourself who decided on the default parameters (or their absence), and you’ll read man pages upon man pages for all of that.
So if you did that, congratulations, you have acquired a valuable skill – at least as long as there’s no good GUI-based solution for the task. But there are more problems with working purely in text mode:
You type, the program executes, and you get output. This sequential dance has no way to give you additional information while you type or while the program runs. When it dumps a lot of text on the screen, you scan it for what you need to know, and then you type and wait again. For a lot of problems this doesn’t matter at all; sometimes it’s even faster, for example when you grep for a certain file you’re looking for. But in the shell there’s no mouseover, no intuitive visual design that could give you clues about what to do and how to do it, no spelling correction when you mistype a character in the middle of a command that wraps over three lines. Type and execute.
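That kind of spelling correction isn't technically hard, which makes its absence all the more striking. A minimal "did you mean …?" sketch using Python's difflib (the command list here is just a tiny illustrative sample):

```python
import difflib

# A tiny, illustrative sample of command names; a real shell would use
# everything on the PATH plus its built-ins.
KNOWN_COMMANDS = ["grep", "chmod", "chown", "mkdir", "rmdir", "touch"]

def suggest(typed: str):
    """Return the closest known command for a mistyped one, or None."""
    matches = difflib.get_close_matches(typed, KNOWN_COMMANDS, n=1, cutoff=0.6)
    return matches[0] if matches else None

print(suggest("gerp"))   # → grep
print(suggest("mkdri"))  # → mkdir
```

A few shells and tools have since added exactly this kind of hint, but it remains the exception rather than the baseline.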
Don’t misunderstand me here: I wish I were a master of the command line. The myth of command-line magic is what makes us admire those who have mastered it. The basic thought of everyone who needs to use the command line is “damn, this is hard”, “it’ll take forever until I get all this” and “I guess I have to learn all this stuff”. That makes us admire those who CAN do it all and know all the commands even more.
But times are a-changing. The first really usable, widespread operating system with which you could accomplish pretty much everything was Windows XP in 2003. On every other system you always had to master the command line to do what you really wanted. It’s been 8 years since, and a lot has changed. GUI programmers have gotten better at what they do, and they’re giving ordinary users the power to use the computer. Windows 7 is a very nice OS, and Mac OS gets better with every release.
To me, command-line commands are a basic foundation you can build an operating system on. But commands are badly designed by default, and therefore you should never make users dependent on them. Never ever. And since it’s not 2003 anymore, you cannot expect even power users to know, or aspire to know, the command line.
I can work in the command line when I need to, but I’m always very slow at it, and I’d always rather have a GUI-based tool at hand that shows me instantly what to do and how to do it, instead of googling and guessing. For my part, if something cannot be done conveniently without the command line, I think the OS is badly designed and needs work.