Google IO 2017

Google IO is Google's annual developer conference, held in the San Francisco Bay Area. This year I attended Google IO Extended, which happens all around the world at the same time as the main IO event; it's designed for people who can't make it to the main event but still want to hear the latest news.

There was one main theme this year from Google, and it’s summed up in this phrase:

“Mobile first to AI first”

In every area Google spoke about (from new processing hardware and home automation to Android devices), everything had been improved by AI!

Another nice fact they mentioned: Android now runs on over 2 billion devices, and 82 billion apps were installed last year.

Below are some of the big headlines!

Google Lens

A new app for your phone: point it at something, be it a flower, a restaurant sign or a Wi-Fi label, and it will understand what it's looking at: identify the flower, show the menu for the restaurant, or automatically join the Wi-Fi network! It can also translate the text on signs.

They also showed a cool demo where the AI could detect an obstruction (a wire fence) and remove it from the picture. This is a huge leap in computer vision.

Google Home

Google Home seems to do a lot more than I realised; for instance, it can recognise up to six different people in a household and customise the experience for each one. Google is now adding phone calling to Google Home for free. It's only available in the US currently, but you can just ask Google Home to phone your mum, for instance, and it will recognise who you are and find your mum in your contacts. If your partner does the same thing, it will phone their mum, not yours.

Another new feature is visual responses, which is super cool. You can ask Google Home something, say "what does my calendar look like today?", and Google will display it on a smart TV, Chromecast or other Google-connected device. I really think this will become super useful. You could ask Google Home how long it will take to get somewhere, then tell it to send the directions to your phone.

They also introduced something called Proactive Assistance. The idea is that Google Home will detect things that may be important to you and let you know about them via a visual light on the device; for example, if traffic is really bad and you have a meeting coming up soon.

Google Home now integrates with over 70 smart home manufacturers.

Virtual Reality

Google already makes a VR framework (Daydream) and a headset your phone fits into. This year Google announced two standalone VR headsets (no phone or PC needed) coming out this year, in partnership with HTC (who make the HTC Vive VR headset) and Lenovo (who make the Project Tango tablet for 3D mapping / AR). What's very interesting here is that they are bringing out their own indoor tracking solution that does not need external sensors. They call it VPS (visual positioning system), which I believe could be an advanced version of SLAM.

They also announced that the new Samsung S8 will support the normal Daydream VR headset, which I found odd as Samsung is in partnership with Oculus (owned by Facebook, a direct rival of Vive) and already has the Gear VR.

Augmented Reality

Google announced another Tango handset (it's like a Microsoft Kinect embedded into an Android tablet) and announced Expeditions AR, which brings AR to the classroom. Kids will be able to place augmented 3D objects within the classroom, for example to see how volcanoes erupt.

Suggested Sharing

Suggested Sharing is a new feature for Google Photos that uses AI to detect well-taken pictures, and who is in them. It then suggests / reminds you to share that picture with the people in it. It forms an online collection of all the images, so you finally get to see the pictures you are actually in (if someone else took them). There is also an automatic mode, for example if you always want to share pictures of your kids with your partner. Feels a little scary to me.

Cloud TPUs

So, anyone in computing will know what a CPU (Central Processing Unit) and a GPU (Graphics Processing Unit) are. Google likes to do its own thing and last year announced the TPU (Tensor Processing Unit), designed to be very quick at machine learning workloads. Google is now calling them Cloud TPUs, and each one can do 180 teraflops.

Android O

There were a few new features mentioned in the keynote, but nothing I found too exciting. They mentioned picture-in-picture and notification dots, both of which iOS already has. They mentioned Android Studio 3 and support for Kotlin as a first-class language; again, I guess it's their answer to Swift for iOS. There was the usual focus on battery usage, security (Google Play Protect) and making apps boot faster; they say they have seen 2x improvements in app start-up. Google has also improved copy and paste so that it automatically recognises addresses, company names, phone numbers etc., which in all honesty I thought it already did.
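For the curious, notification dots hang off Android O's new notification channels: whether a dot shows is a per-channel setting. Here is a minimal sketch in Java; the channel id and text are made up for illustration:

```java
import android.app.Notification;
import android.app.NotificationChannel;
import android.app.NotificationManager;
import android.content.Context;

// Android O ties every notification to a channel; the launcher dot (badge)
// is enabled per channel rather than per notification.
public class DotDemo {
    static void notifyWithDot(Context context) {
        NotificationManager manager =
                (NotificationManager) context.getSystemService(Context.NOTIFICATION_SERVICE);

        // "news" is an arbitrary channel id for this sketch
        NotificationChannel channel = new NotificationChannel(
                "news", "News", NotificationManager.IMPORTANCE_DEFAULT);
        channel.setShowBadge(true); // this is what enables the dot on the app icon
        manager.createNotificationChannel(channel);

        Notification notification = new Notification.Builder(context, "news")
                .setContentTitle("Hello from Android O")
                .setSmallIcon(android.R.drawable.ic_dialog_info)
                .build();
        manager.notify(1, notification);
    }
}
```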

iOS Support

Throughout the presentation, whatever new stuff they demoed, they kept making a point that it's also supported on iOS, not just Android (Google Assistant, Google Photos, Daydream etc.), which I personally thought was cool.

Lastly, the one that probably made me laugh the most!

YouTube

YouTube for TV and consoles will now support 360 video, including live events, and YouTube viewing on TV has gone up by 60%. However, the big news is Super Chat and Trigger Actions.

Super Chat allows you to pay for your comment (to a live YouTuber) to be noticed, so if you really want to ask that question, you can pay for it. Not too bad, I guess. But Trigger Actions allow you to pay to trigger something in the live video, like throwing a water bomb at the presenter or turning the lights off in their house. I can see this going downhill pretty fast.

2-way communication between leJOS and Android : Work in progress

So, one of the big requirements I have of a programming language for the EV3 is being able to talk to a mobile device. With the EV3 being newer, fewer options seem to be available.

I have started work on some leJOS samples to demo 2-way communication with an Android device via sockets. The current work in progress can be found on GitHub: https://github.com/burf2000/LejosToAndroid
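Since the EV3 runs leJOS on a full JVM, plain java.net sockets work on the brick. Below is a minimal sketch of the EV3 side of the idea; the class name and port are illustrative, not the actual repo code:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

// Runs on the EV3: accepts one connection and echoes simple text commands back.
public class Ev3CommandServer {
    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(1234)) { // arbitrary port
            Socket client = server.accept();                 // blocks until the phone connects
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(client.getInputStream()));
            PrintWriter out = new PrintWriter(client.getOutputStream(), true);

            String line;
            while ((line = in.readLine()) != null) {
                // React to the command here (drive motors etc.), then reply,
                // which is what makes the link 2-way rather than one-way.
                out.println("ACK: " + line);
            }
        }
    }
}
```

The Android side is the mirror image: open a new Socket("<ev3-ip>", 1234) on a background thread (Android throws NetworkOnMainThreadException otherwise) and read/write the same two streams.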

Watch this space!

Xamarin : Results so far

So, I have been playing with Xamarin on and off (damn meetings) and I am slowly getting to grips with it. I am not a C# developer, so while learning Xamarin I am also learning C#!

I have to be honest, some things I have found a real struggle. The Android part of my project just died and would not build for toffee; after many hours of searching for info, and even talking to Xamarin (work call), it turned out to be a corrupt package in a hidden directory. To be fair, with every new tool or language you are going to hit these issues.

The other random issue, which I have not solved, is using OpenTK and OpenGL to render a cube on the screen. This is completely shared code (WOOHOO), however it works on iOS and not on Android. One thing I picked up when speaking to Xamarin is that if any platform is going to break, it's Android, due to Google pushing updates. I can respect that! Android Studio 0.9 -> 1.0 broke every project for me.

The Show must go on!

I can see some real benefits to Xamarin; if I was a C# developer I would be in heaven! I really want to see what's possible with Xamarin Forms, the holy grail of mobile development (be warned, it's not! It's a good technology for a certain job, not all jobs).

In all honesty, I don't have much interest in Xamarin Native, which is where you write C# code that binds directly to the native Apple/Android APIs. I would prefer just to do that in Xcode / Android Studio.

Focus!

I think my personal focus for Xamarin is cross-platform game development using MonoGame or OpenTK's OpenGL. I either need to learn XNA for MonoGame's 3D side or see how different OpenGL is under OpenTK. This, however, brings me to the question: why am I not using Unity3D?

My work focus is Xamarin Forms and working out how far you can push it. I have been told things like custom pins on a map are a no-go in Forms.

I will keep you posted on what I discover!

Xamarin ???

This week, I will mostly be looking at Xamarin 🙂

On a serious note, I have been asked at work to take a look at Xamarin and see if it's something we can use, and if so, for what. The native iOS/Android developer in me instantly goes NOOO. However, the excited child in me always says yes!

I think I assumed I knew what Xamarin did without actually knowing, and their hugely limited free version kept the hobbyist in me away. However, starting the free trial and looking into things shows that this could actually be pretty cool! I am not ready to review it yet because I have not made my first app, but a few days of initial research is looking promising.

The main decision I need to make is between Xamarin Native and Xamarin Forms. Xamarin Forms does look very good (one universal app), however I don't think the info on their website is up to date compared with what is actually possible. They basically say that anything marginally complex should be done in Native; however, if you see some of the stuff people have done with Forms, it's very impressive.

I hope to do a post by the end of the week on how I got on with making a simple app 🙂

New version of Virtual Worlds Submitted for Android

Just a small update: the following features have been added and bugs have been fixed.

  • Added a feature to turn VR mode off
  • Added a feature so you no longer need a game controller
  • Fixed an issue where you could place objects in other people's worlds
  • Fixed a few crashes!

Please check out Virtual Worlds!

Developing for the Pebble Steel

I don't generally wear a watch, but for Christmas my wife got me a Pebble Steel smartwatch, which was exactly what I wanted 🙂 I wanted this over an Android Wear device because it works with iOS as well as Android, plus it has a battery life of more than a day!

The watch works seamlessly with my iPhone, and everything is so easy to use and access. I find it funny that the Pebble Steel works so well with iOS but my Google Glass does not (yes, there is an iOS app for Google Glass, but getting all of your notifications to appear on the Glass seems impossible).

Developing for the Pebble is done online in their cloud editor, and you can develop in JavaScript or C. As an iOS developer I went for C and was really impressed by how easy it was to develop in a browser and then deploy to the watch with a single click of a button! The tutorials are pretty easy to follow and understand. I did have an issue when I switched to JavaScript (third tutorial), where you have to define constants via the IDE settings area. However, that could have been user error 🙂

I think they have done a really good job from start to finish with the Pebble. If you fancy starting development, check out Developer.getpebble.com

Virtual Worlds update 0.1

So, Virtual Worlds. Hmm, I had planned to release that by the end of the month, however a few things have come up! Man flu has not helped me code; the snot just goes everywhere!

I have also had a few other things on!

Excuses over, where are we?

  • Started getting a better set of textures together, thanks to Gary
  • Made a very rough, simple player model, which may be replaced by an OBJ file
  • Fixed lots of bugs
  • Created a control screen showing the controls etc.
  • Improved the navigation throughout the app
  • Implemented some spoken help notifications
  • Got Google Cloud Messaging working, however I am not sure why I need it now
  • Moved to the live server!

Left to do by 1st December

  • Improve the player object
  • Finish off the texture pack
  • Make a scalable zone object
  • Fix any controller issues
  • Allow user interaction, maybe using GCM
  • Allow players to fly in zones they own, to help build them
  • TEST

Wish me luck!

Virtual Worlds : Progress so far

What? Virtual Worlds? What a sh*t name? You're probably right, but it's just better than "Android 3D MMO thingy"!

Progress so far? I would have liked to give it more time, but sometimes I am not in the mood, and sometimes I can't get out of the mood, which also leads to lack of sleep! Last night I had beer! (Not a usual thing, sadly.)

What can you do so far?

  • Register and log in
  • Navigate the main world, which allows you to enter player zones and create your own zones
  • You can see other players (as cubes) moving about
  • Once in your own zone, you can place cubes to make structures (see the screenshot below); this has been vastly improved
  • You can climb your structures to make bigger structures (I had to make a staircase to build the archway)
  • You can enter delete mode in your zone and delete things with ease
  • Started adding back in the voice engine, sound effects etc.
  • There are now system text messages to show you important information
  • Players and zones have text above them showing their names

[Screenshot: device-2014-11-15-231021]

What's next

So, to get it to the point where I can alpha test it, I would like to do the following:

  • Create a 3D model for players, as they are currently cubes
  • Make a 3D object for a zone; this should grow as the zone gets more popular / better in some way
  • Find a wider range of textures: snow, grass, wood, etc.
  • Create a control help screen listing the gamepad controls
  • Make all commands voice activated (may get dropped)
  • Allow you to send messages to users (may be text or voice)
  • Create some sort of reason to play the game: prizes, scoreboards etc.
  • Implement GCM (push notifications; see the sketch below)
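Since GCM keeps coming up, here is a minimal sketch of device registration using the Google Play Services GCM API of that era. The sender ID is a placeholder for a real project number, and registration has to happen off the main thread:

```java
import android.content.Context;
import android.os.AsyncTask;
import com.google.android.gms.gcm.GoogleCloudMessaging;
import java.io.IOException;

// Registers this device with GCM so a server can push notifications to it.
// SENDER_ID is a placeholder for the project number from the Google Developers Console.
public class GcmRegistrar {
    private static final String SENDER_ID = "YOUR_PROJECT_NUMBER";

    public static void register(final Context context) {
        new AsyncTask<Void, Void, String>() {
            @Override
            protected String doInBackground(Void... params) {
                try {
                    // Network call, so it must not run on the UI thread
                    return GoogleCloudMessaging.getInstance(context).register(SENDER_ID);
                } catch (IOException e) {
                    return null; // a real app would retry with exponential back-off
                }
            }

            @Override
            protected void onPostExecute(String registrationId) {
                // Send registrationId to the game server so it can target this device
            }
        }.execute();
    }
}
```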

The aim is to have a demo zone that's set up for Xmas when I release the alpha 🙂

Off to bed!