bloggeek

The leading authority on WebRTC

How Can Localized CPaaS Players Thrive?

Mon, 02/27/2017 - 12:00

Who really knows?

Local or global for CPaaS?

In the past two months I’ve been conducting briefings with a lot of the CPaaS vendors out there. Some of them are now being added to my Choosing a WebRTC API Platform report that is getting a refresh next month.

During these chats, I came to recognize a relatively lesser-known type of CPaaS – the localized one.

To me, it was almost always about two types of players: you were either a "telco" or a global player. Telcos were based in their geography of operation, both limited and protected in their environment due to rules and regulations. Global players were the pure XaaS cloud vendors. I even created a nice graphic for that in 2013, comparing telcos to OTTs:

What we see now are mainly UCaaS vendors who are trying to join the CPaaS world. Put simply – vendors offering phone systems as a service to enterprises, now adding APIs for developers.

There are two reasons why these vendors are local in nature:

  1. Their offering is still tethered to a phone number for the most part, making them akin to a telco
  2. They prefer it that way, probably because of where most/all of their business is taking place

For UCaaS, this localization might not be a real issue. Corporations are still thinking locally, even the global ones. You work with people. And you use phone numbers, so you tend to follow the "law of the land" in every country – from how you allocate phone numbers to what exactly BYOD means in that country.

The problem starts when you want to serve one of two markets as your customers:

  1. Large multinational corporates, whose footprint is global
  2. Small entrepreneurial teams who are spread out globally

That first one always existed. But I believe there are more of them today – smaller sized companies are reaching out globally, making them "multinational corporations".

The second? More of them as well, but probably focused on specific industries – high-tech comes to mind immediately.

What changes when moving to CPaaS while staying local?

The first thing you’ll notice with these vendors is that while they may be using AWS for their data centers and are offering WebRTC connectivity and VoIP, their distribution across AWS data centers ends up looking something like this:

This means that WebRTC calling is going to be affected – and not for the better. If I need to get a call going in France (2 people in Paris for example), then they get connected via US East – maybe even routing their media through US East. Which lends itself to a bad user experience.

For UCaaS we might not care, as this would be an outlier – but for CPaaS?

The difference now is that we are API driven in what we do. We build processes around our company to offer programmatic communications. And these tend to be wider spread than the local mindset of corporate communications.

Are we using CPaaS to enable our customers to interact directly with each other? Are our customers as local as our own company is?

The end result

In most cases, the end result is simple. CPaaS is there as an API initiative for the UCaaS vendor.

As companies learn the importance and strength of integrations (see Slack and many of the new startups who offer B2B services and capabilities), they tend to offer APIs on top of their own infrastructure. One that their customers can use to integrate better. This makes CPaaS just that API layer.

In the same way that a movie rating company would offer an API that exposes its ratings, a UCaaS vendor exposes an API that enables communications – and then coins it CPaaS.

Only it may not really be CPaaS – just an API layer on top of its UCaaS offering.

The business models here are also different – they tend to be per seat and not per transaction. They tend to rely on the notion that the customer already paid for UCaaS, so there’s no need to double dip when they use CPaaS, as long as they do so reasonably.

Does this make it any worse?

No.

It just makes it different, but still something you’d like to try out and see if it fits your needs.

Is this confusing? The whole UCaaS/CPaaS area is murky and mired in doubletalk and marketing speak. It is really hard to dig deeper and understand what you’re getting before trying it out.

 

Tomorrow I’ll be starting out a short series of emails about the decision making process of build vs buy – building a comms infrastructure versus adopting a CPaaS offering. If you aren’t subscribed to my newsletter, then you should subscribe now, as these emails will not be available here on the blog.

 


When CPaaS Target Enterprises. 3 Different Approaches

Mon, 02/20/2017 - 12:00

All paths lead to the enterprise.

My report was titled Choosing a WebRTC API Platform vendor. Later, I leaned towards calling it WebRTC PaaS. Last year, a new industry term started to get used a lot – CPaaS – Communication Platform as a Service. These in many cases will include WebRTC, which leads me to call the vendors in my report CPaaS from time to time.

I am currently working on the 6th edition of my report. It has come a long way since it was first introduced. A theme that has grown over the years, and especially in the past several months, is the way vendors are vying for the enterprise. It makes sense in a way.

Here are 3 different approaches CPaaS vendors are taking when they are targeting the enterprise.

#1 – Special Pricing

There’s a current notion that WebRTC is free. This in turn leads vendors in this space into a race to the bottom when it comes to pricing, which pushes them to look at other alternatives – bringing them towards the enterprise.

Why?

Because enterprises are willing to pay. Maybe.

This special pricing for enterprise means they pay more for a package that may accommodate them a bit better.

Check out for example Twilio’s Enterprise Plan. It starts at $15,000/month, offering more control over the account – user roles, single sign on, events auditing, etc.

It is a great way to grow from the penny pinching game of all the platform users to some serious numbers. The only question is – will enterprises see the extra value and be willing to pay a premium for it? I sure hope they do; otherwise, we may have a big problem.

#2 – Extending UCaaS to CPaaS

The second set of players vying for a place in the enterprise CPaaS game? The UCaaS players.

If you do UCaaS – Unified Communications as a Service – then you already have an infrastructure that can be leveraged for CPaaS. Or at least that’s what the story says.

Who do we have here?

  1. ooVoo, went from running a communication service towards a developer play (not an enterprise one mind you), only to shut it down later
  2. Cisco Spark, coming up with its own API
  3. Skype, looking at developers from time to time, trying to get their attention and then failing to give them the tools they need
  4. Vonage, with its acquisition of Nexmo
  5. RingCentral, adding a developer platform
  6. Vidyo, to some extent, with VidyoCloud and Vidyo.io

The challenge here is the change in the nature of the business and the expectations of the customers. Developers aren’t IT. They have different decision processes, different values they live by. Selling to them is different.

#3 – Simplifying and Customizing

There are those who try to simplify things. They build widgets, modules and reference applications on top of their CPaaS. They customize it for the customer. They try to target customers who aren’t developers – assuming that enterprises lack the capacity and willingness to develop.

They augment that with the system integrators they work with. Ones who speak the language of the enterprise. Ones who can fill in all the integration gaps.

This is a slow process though – elephant hunting at its best, especially when compared to the pace of the rest of the CPaaS game.

Where does this all lead us?

There is no one-size-fits-all.

No silver bullet for winning the enterprise.

But there are a few approaches out there that are worth looking at.

For me? I am just looking at it from the sidelines and documenting this process. It is part of what gets captured in my upcoming WebRTC PaaS report – these changing dynamics. The new entrants, the existing players, and the progress and change that others are experiencing.

 

Want to make the best decision on the right WebRTC platform for your company? Now you can! Check out my WebRTC PaaS report, written specifically to assist you with this task.

 


The 3 Schools of Thought for Creating a WebRTC App

Mon, 02/13/2017 - 12:00

Different people think differently when it comes to creating WebRTC apps (obvious? maybe).

I’ve seen it all. I talk almost daily with different stakeholders in companies that end up adopting WebRTC. A developer here. An entrepreneur there. A product manager. A marketing lady. A PR agency guy who knows shit about what he is promoting. A test engineer. An architect. The sales guy. You name it.

There’s a HUGE discount on my Choosing a WebRTC API Platform report now. And it is valid only until the end of this month. Why not grab your own copy of it?

If you go and ask the average Joe who is doing something with WebRTC how exactly creating a WebRTC app gets done, you’ll get one of 3 different answers. You see, at the end of the day, our small market can probably be pushed into 3 neat drawers, each representing a different school of thought. 3 different sets of values and worldviews.

These worldviews can be found within the same industry niche, serving the same customer base, using similar business processes and user experiences, and yet each vendor will explain in detail why his way of creating that WebRTC app makes the most sense. And it does. It truly does.

I mostly agree with my buddy Chris Kranky who wrote about build vs buy of WebRTC apps last week. His approach is simple – why invest in your own infrastructure just to cut out the ridiculously low monthly plans of a CPaaS vendor? I see it as well with those approaching us at testRTC and then running away in a cold sweat because they need to pay a 3-digit figure to stress test their product. But I digress.

Back to the schools of thought.

I think it is important to understand your own behavior and that of your team before moving forward, as this may hinder your decision – or at the very least explain it. What you want to aim at here is to find a good match between your strategy and your team, but also between your team and what you are trying to achieve. In many cases, I’ve seen failed attempts and increased risk because the choices made just didn’t make sense with the market realities.

It is similar to my recent article on a development path in WebRTC, just looking at it from a slightly different angle.

Let’s see what these alternatives are:

#1 – The Tinkerers

“Look what I found in the Chromium Issue Tracker”

When it comes to WebRTC bugs, Tinkerers know the current bug status in the Chrome Issue Tracker by heart.

They want to build stuff with their own bare hands, sometimes forgoing even open source frameworks and doing things from scratch. Our industry has a few of these and their names are quite known (Fippo anyone?)

If you’ve got a real Tinkerer in your team, then you’re in great luck. There are few of them out there – and even fewer who understand what they talk about and how to make real use of WebRTC.

The main challenge here is that you need to have more than a single Tinkerer. If she leaves to work someplace else – what are you to do then?

There are teams of Tinkerers out there building great products with WebRTC. What is interesting is that these are the teams that get acquired for the technology they end up with. These are AddLive, Screenhero, BlueJimp and to some extent even Beam.

#2 – The Owners

“How can I rely on someone else with my infrastructure? I must own it to be able to resolve issues and control my own destiny”

And still you go place your service in AWS.

Owners like to own stuff. They need control over it. They are just fine paying others to build it, but they want to own and control the finished product.

While I value ownership, I think it is something that needs to be questioned each and every time. Is that piece of technology really core to the business? Is it where innovation and barriers of entry to your market are found? If not, then you should probably rent and not own.

On the other hand, ownership is sometimes necessary, and the reasons vary from case to case. Here are a few good ones I’ve heard:

  • No CPaaS vendor covers our geography
  • Regulation in our industry mandates it. Our own customers are “Owners” themselves
  • We have this critical feature no one has that forced us into building our own infrastructure
  • Our industry is crowded and competitive. We need any advantage we can get

#3 – The Dreamers

“Collectors of wood art need a place to meet”

I have a dream. And in my dream, I see an untapped market. A need that isn’t answered. I am clueless about the technology that gets me there, but I am sure that will solve itself.

There are many of these around. Especially now that WebRTC is with us. The reason? WebRTC lowers the barrier of entry for new players. And the best way of getting there fast is by employing CPaaS.

The Dreamers focus on their target audience. That niche market, where a problem needs to be solved. In many cases, they come from that market.

A dancer with cancer, trying to find a place for women suffering from cancer to dance from home.

Psychologists building an online group therapy service.

Teachers building education services for a very specific target audience.

You’ll find these in the verticals – places where communications is a part of the service – a feature within another service.

To You Now

Did you find yourself in that list somewhere?

Are you a Dreamer trapped inside an Owner school of thought because of the limitations of CPaaS vendors?

Are you striving to be a Tinkerer but don’t have the workforce for it?

How do your intents and company DNA match with your school of thought?

I am seriously interested in it, so leave a comment or email me about it.

 

Want to make the best decision on the right WebRTC platform for your company? Now you can! Check out my WebRTC PaaS report, written specifically to assist you with this task.

 


Microsoft and the WebRTC Edge Case

Mon, 02/06/2017 - 12:00

Microsoft getting its act together with WebRTC support in Edge is far from a fashionable delay, and it says more about the future of Edge than the future of WebRTC.

Want to run WebRTC on anything? Check out my free WebRTC Device Cheat Sheet.

Last week, Microsoft officially announced their intent to support WebRTC 1.0 (whatever that is exactly). I skimmed over that piece of news, but direct questions from a couple of my friends about my opinion, along with short chats with them on various messaging services made up my mind to actually write something about this.

Let’s begin from the end – nothing changed under the sun. Microsoft Edge supporting WebRTC is a non-starter for most.

Now for the details.

The Microsoft’s WebRTC Edge announcement

The title of the announcement? Introducing WebRTC 1.0 and interoperable real-time communications in Microsoft Edge. In the title, "1.0" takes precedence over interoperability; in the announcement itself, interoperability (with Chrome) takes precedence over everything else.

  • This release is still in beta, the Windows Insider Preview, making its way to a stable release in the upcoming Windows 10 Creators Update – taking place sometime in "Spring 2017" – somewhere in Q2
  • It seems that this release won’t support the WebRTC data channel, as Microsoft is more interested in getting its Skype asset to work first – making Peer Connections and interoperability a lot more important
  • On the video side, the version supports both H.264 and VP8, making the old codec wars moot (until Apple decides to join the fray)

Microsoft is doing their damnedest not to mention Chrome in the announcement – and they have succeeded. Barely. The announcement mentions "video communications are now interoperable between Microsoft Edge and other major WebRTC browsers and RTC services". I wonder who these major WebRTC browsers are. I wonder even more what "RTC services" means here. Maybe the ability to do a Facebook Messenger video call from an Edge browser to the Facebook Messenger iOS app?

They did fail in keeping Chrome out completely, by naming the specific bandwidth estimation algorithm that Google is using: "Support for Google Receiver Estimated Maximum Bitrate, goog-remb". I have to wonder how hard legal fought over getting this one into an official blog post on Microsoft’s website.

The end result? Edge will soon have video interoperability support with Chrome and Firefox when it comes to WebRTC.

This is great, but it changes nothing.

Edge and Market Share

Somehow, a friend of mine actually thought Edge is a real thing. Until we started searching for recent market share indications. The most recent publicly available stats I found are from November 2016, and they place Microsoft Edge at 5.21% – "Trailing Microsoft Edge was only Apple’s Safari browser, with 3.61%, and Opera with 1.36%."

NetMarketShare places Edge even lower at only 4.52%:

Great.

While Windows 10 has grown in adoption, Edge hasn’t.

Things are now getting desperate. On my own Windows 10 laptop, Microsoft is now pushing "ads" about how Edge is faster than Chrome and how great it is – enticing me (unsuccessfully) to try it out.

This is happening to everyone NOT using Edge apparently, with some suggesting how to stop this Microsoft pushing you to the Edge.

The Enterprise Urban Legend

Microsoft reigns supreme in enterprises. There’s no doubting that. But here’s the thing – the browser of choice there is Internet Explorer. Not Edge.

Many of these enterprises use Windows 7 today – NOT Windows 10. So the IT guy in the enterprise sees the following choices in front of him when he needs to decide on a major upgrade:

  1. Switch to Windows 10
    • While doing that, continue using Internet Explorer 11
    • Or go for HTML5, and standardize around Chrome, Firefox, Edge – or all of them
    • There’s a hybrid option of IE11/Edge which isn’t that fun, and as far as I am aware, isn’t popular
  2. Stay with Windows 7, but shift away from Internet Explorer 11
    • Microsoft’s Edge browser isn’t available there anyway, so that option is not possible
    • So you have to go for HTML5, and standardize around Chrome and Firefox

A smart IT person will decide on a project that makes the switch in stages, taking him down one of these two routes:

  1. Switch to Windows 10; later switching to HTML5
  2. Switch to Chrome/Firefox; later switching to Windows 10

The common denominator? Use Chrome/Firefox and NOT Edge. Which is why most end up forgoing Edge. That and the notorious reputation of IE that is tarnishing Edge.

From a friend who works with such large enterprises, I am told that most are asking either for Chrome/Firefox support or Windows 10 with… Chrome/Firefox support. There are no requests coming in for the Edge browser.

In the enterprise, it is either IE11 or Chrome/Firefox these days.

What Does the Future Hold?

From day 1 of WebRTC, it seemed obvious that out of the oligopoly of 4 web browsers, two are going to be adopting WebRTC (Chrome and Firefox) and two will need to be dragged towards adoption (IE and Safari).

Microsoft decided to kill IE and focus on Edge. It also decided to throw in the towel on ORTC and adopt WebRTC. The reasons are rather obvious – when you lack market share, you need to follow the trends. It tried taking the higher ground with the better ORTC design, only to fail, get back in line, and now introduce WebRTC.

Apple… who knows? They hire people. They commit stuff into WebKit. They have people in the standards bodies. Will this mature enough in 2017 for an official release? Maybe. Probably. I just don’t know.

As always, before you make the decision on what to support – do an investigation of your target users and what they are using. You might be that outlier whose users are that 5% using Edge…

If you need WebRTC to work for you, you’ll need to understand how to get it running on any device and browser. My WebRTC Device Cheat Sheet is still as relevant as ever. It’s free, so go ahead and download it.

Get the cheat sheet


How to Choose a WebRTC Development Path?

Wed, 02/01/2017 - 12:00

There’s an underlying theme to the questions from those approaching me. More often than not, they are trying to decide what to pick as their WebRTC development path.

Before we continue – there’s a huge discount on my WebRTC PaaS report this month – so make sure you don’t miss it.

About 10 years ago or so, a large LCD manufacturer came to the company I worked for. They wanted to build a monitor that has an embedded video conferencing endpoint in it. It was revolutionary at the time. Beautiful. Aggressive. And we won the project. That was when the hard work started – we had to pick a team to build it.

We had some of the best VoIP developers out there at the time. Our business unit licensed the technology to everyone and anyone who did anything with VoIP, so it made sense to self-develop. And yet… we found ourselves hiring around 8 more developers for this project. All with skillsets different than the ones we already had. We were missing skills in the domains of media processing, codecs and hardware.

In almost any case, when it comes to WebRTC, the skillset you already have in place will not be exactly what is necessary to get the job done.

You will either have to hire the skillset, teach your current workforce new tricks or outright outsource the work.

Which means that when you start thinking about a product that needs real time communications, there are usually 2 routes you can pick when it comes to the WebRTC development part:

  1. Use in-house developers
  2. Use an outsourcing vendor

And in them, there are 2 additional aspects to decide upon:

  1. Develop and maintain your own infrastructure
  2. Use a 3rd party CPaaS

This all leads to the following WebRTC development matrix:

There are 4 classic development paths that I see companies take with their projects.

#1 – Hard Core

The Hard Core development model is all about control and core competencies.

At the extreme, these vendors will tend to "live off the land" and go as far as building their own SFU from scraps of code they cobble together from around the web.

What they see as imperative in front of them is to be the best at communications – and to rely as little as possible on others.

For the most part, these would be companies with existing VoIP heritage who make the plunge towards a WebRTC project. At times though, they can be brand new to VoIP and for them, it starts as an adventure of sorts.

Another vendor type in this space would be the NIH type of a vendor. Where the Not Invented Here syndrome is high, and trust in others to deliver what is needed is non-existent. These guys know better than everyone else how to put this type of an infrastructure in place and no explanation would deter them. Their best argument? CPaaS is too darn expensive when you start counting the minutes. And I already said – WebRTC is… free.

#2 – IT Project

The IT Project development model is about ownership.

The end game here is to own the infrastructure and the data flow. Companies will tend to go with this approach if they have a regulatory issue they need to deal with (such as "data can’t travel out of the country"), a need to serve a specific geography (China for example) or special requirements that translate into the need to use their own infrastructure since others won’t support it (real time video analytics in the backend comes to mind).

As with the Hard Core players, there will be those who simply fancy the idea of ownership.

Why is this an IT Project then? Because an external outsourcing vendor is brought in to develop it for the company. Usually because the internal workforce doesn’t have the experience, is overbooked, or is just more expensive.

#3 – Integrate

The Integrate development model is about getting things done.

You have the developers in place. They are already building your product/app/service/whatever. And you direct them to a CPaaS vendor of your choosing to work on top.

Why? Because communications isn’t your core business. Because it is cheaper. Because it gives you faster time to market. Have a pick of your reason for it – there are quite a few.

#4 – Agency

The Agency development model is about the dream.

You have a dream. You want to get it done. But you don’t really have the experience or the manpower. But you have some budget for it. What do you do?

Find an outsourcing vendor to build the product for you. Use CPaaS to work on top, so ongoing maintenance will be kept at a minimum if possible. And have that delivered to you.

Simple. If you pick the right outsourcing vendor. And the right CPaaS vendor. And the stars align just the right way to give you the luck needed in any development project.

How to choose your path?

I am in the process of updating and beefing up my WebRTC PaaS report, which is geared towards the vendor selection process of a CPaaS vendor. This time, I’ll be enhancing it to cover the IT Project, Agency and CPaaS quadrants.

Until I do, I am lowering the price of the currently available WebRTC PaaS report from $1,950 down to $199. This should make it affordable to anyone who is planning on investing any kind of time or money in this space.

There are a few caveats:

  1. This discount is available ONLY during February. The price will rise in March when the updated report is ready
  2. The report will not include any of the extras – just the report itself
  3. Specifically, it won’t come with any 1-year updates

Those who purchase the report now will be given a discount later on if they wish to go for the 1-year update package. Which means there’s nothing for you to lose when grabbing the report now at a huge discount.

Purchase the WebRTC PaaS report

 


Vidyo.io and Differentiating in the Brave New CPaaS World

Mon, 01/30/2017 - 12:00

Making a CPaaS platform is not easy. Making one that is differentiated? That’s even harder.

Vidyo announced their new CPaaS called vidyo.io last week. It has been around for some time now – I covered it a couple of months ago on SearchUC – Vidyo migrating from enterprise video conferencing to video APIs. Now it is officially out, and I want to take a closer look at it.

Vidyo is/was an enterprise video conferencing company. Their focus throughout the years has been around software based solutions that make use of Scalable Video Coding – SVC. Its main competitors were probably Cisco, Polycom and Avaya – all operating in the same space.

But now, things are rather different. Vidyo has decided to go after two adjacencies at the same time – UCaaS and CPaaS:

  • UCaaS – VidyoCloud offering a managed video conferencing service for the enterprise. A competitor to companies such as BlueJeans and Zoom (the latter just raised $100m at $1b valuation)
  • CPaaS – vidyo.io offering an API for developers that does… video conferencing that can be embedded virtually everywhere

CPaaS isn’t something new. I have been covering the WebRTC part of it for quite some time in my own WebRTC PaaS report. And there are lots and lots of companies in that space already. So why bother with yet-another-one in 2017?

That’s a question for the vendors who join the market, but I’d say this. You can be one of two types of companies:

  1. A direct competitor to Twilio, the jack of all trades when it comes to CPaaS. This means you’ll need to support anything and everything related to communications AND give developers a reason to use your platform instead of Twilio (good luck with that)
  2. Become differentiated in a way that leaves developers no choice but to come to you. How do you do that? Beats me if I know. I just write here

Vidyo took the latter approach with vidyo.io, going for differentiation.

What is it that can make vidyo.io so enticing to developers and differentiate them from the crowd? Here’s what I came up with.

#1 – Razor focus on video

Vidyo does video. Not much more. There aren’t many players in this specific category besides maybe TokBox. ooVoo might be considered as another such player, at least to some extent.

This resistance to filling in the legacy gaps of voice and SMS probably isn’t an easy choice. There’s a need to maintain a lead in the technology and capabilities of IP based video communications, and at the same time to believe that this lead is warranted and perceived by customers as an advantage.

Would you source your video from a second vendor, or just have Twilio or whichever other CPaaS player you’ve picked for your phone calling do the video part for you as well? Maybe. Maybe not. It depends on how critical video is for you and what features you are looking for. For mission critical applications such as healthcare, financial services or even customer engagement, you might not want to take the risk when you don’t have to.

#2 – Enterprise Savviness

Everyone wants to go after the enterprise these days.

Developers are hard to work with when it comes to developer tools:

  1. They aren’t the ones controlling the budget, so hard to get them to pay
  2. They are usually cheap in how much they are willing to pay
  3. And that’s because they think they can develop what you have on their own. How hard can it be to take a FREE WebRTC thingy and turn it into a managed service?

Bridging that gap with the long tail world of developers is hard. So once a platform goes there, it will usually try going upmarket to get enterprise customers. This is one of the reasons that Twilio launched an enterprise plan last year.

Vidyo comes from the enterprise world, boasting as its customers government agencies, financial institutes, healthcare providers and contact centers. In all cases, big names who adopted Vidyo technology in one way or the other to add video to their business needs.

#3 – SVC, in all its glory

There’s a notion that SVC is necessary for multiparty video conferences, and while it indeed is, SVC done right also improves video quality on mobile devices over error-prone networks.

You see, SVC enables you to create different layers:

The sender sends all layers, and the server then peels off the layers it thinks best fit the device they are intended for. So if we send to multiple devices, we can tailor each device’s own view at a lower cost to our server. Which is why SVC is so good for multiparty calls.
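To make the layer-peeling idea a bit more concrete, here is a minimal sketch of the kind of per-subscriber selection logic an SFU could run. Everything here (SvcLayer, Subscriber, pickLayer) is hypothetical and purely illustrative – it is not Vidyo’s or any other SFU’s actual API, and real implementations weigh bandwidth estimation and many more signals.

```typescript
// Hypothetical types, for illustration only - not any real SFU's API.
interface SvcLayer {
  spatialId: number;   // resolution layer (0 = base/lowest)
  temporalId: number;  // frame-rate layer (0 = base/lowest)
  heightPx: number;    // decoded height at this spatial layer
  bitrateKbps: number; // cost of forwarding up to this layer
}

interface Subscriber {
  availableKbps: number; // from bandwidth estimation (e.g. REMB)
  tileHeightPx: number;  // size of this participant's tile on screen
}

// The sender encodes all layers once; the SFU "peels" per receiver by
// forwarding only the highest layer that still fits that receiver.
function pickLayer(layers: SvcLayer[], sub: Subscriber): SvcLayer {
  const fitting = layers.filter(
    (l) => l.bitrateKbps <= sub.availableKbps && l.heightPx <= sub.tileHeightPx
  );
  if (fitting.length === 0) {
    return layers[0]; // assume layers[0] is the base layer - always forward it
  }
  return fitting.reduce((best, l) =>
    l.bitrateKbps > best.bitrateKbps ? l : best
  );
}
```

The key point is that the selection happens per receiver, while the sender only has to encode once.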

For mobile? If I can split a stream into several layers, I can decide to "protect" the lower, more important layers by increasing their probability of being received. How do I do that? I add Forward Error Correction. This means you can improve video quality in really bad networks – Vidyo has had that in their platform for quite some time, making it one of the few players today with WebRTC that can do that.

There’s a lot more to it than that, as Vidyo has been making use of SVC longer than anyone else out there in the industry. Vidyo is also the only CPaaS vendor in the market who offers that as far as I know. You can read more about it in their technology overview on vidyo.io.

Which brings us to the Google angle.

#4 – The Google angle

We’re now in a world of VP8 and H.264 in WebRTC. Those are the video codecs you see deployed in the market. Will this trend continue? Unlikely. Technology waits for no one. H.264 is old. Its first version was officially released in 2003. It has improved, but it is time for the industry to start the transition to a new video codec. VP8 was meant to be on par with H.264. Their successors are VP9 and H.265 respectively.

In all browser based WebRTC implementations we either already have VP9 support or will have it soon. And VP9 is where scalability will happen for WebRTC. A great deal of that is taking place through a cooperation between Google and Vidyo – one that I am not privy to.

The end result? Vidyo will probably have the first SVC-enabled video routing infrastructure that works with Chrome’s VP9 implementation. And this, in turn, can give Vidyo a huge headstart over other platforms.

Can Vidyo succeed with vidyo.io?

Yes. It will be a question of execution, as the path isn’t an easy one.

There are two challenges here that Vidyo is taking upon itself:

  1. Migrating to the cloud, with its own business model challenges
  2. Working with developers and selling a platform and not a product, which is a different ballgame with different rules of engagement

The first point is something that Vidyo has already committed to with VidyoCloud. It is a shift that is happening – with or without the developers angle. This means that management and shareholders are already bought into the cloud strategy and will have the patience to wait until Vidyo shifts and sees enough success in this “new” market.

The second point is where the real challenges will occur. There are different things that take place when you work with developers and on a managed service. Marketing works differently. Documentation and support are also handled differently. The sales processes are different. Vidyo will need to fill in these gaps and learn fast while maintaining its position in the enterprise. Being able to do that will mean being able to win more enterprise customers with vidyo.io

CPaaS Differentiation in 2017

As we enter 2017, differentiation in the CPaaS space will be important.

The notion of "cheaper than X" will not work. It leads to a price war where nobody wins. These managed services require scale, but the market is still new and nascent, so real scale isn’t here just yet for the most part. Reducing prices and going down to AWS levels makes no sense. Especially if you consider the investment in ongoing development that is required in this market.

Vendors are finding their way in this market, each trying to differentiate and carve its own niche. Vidyo may well have found its own where their strength in multiparty quality over mobile can be a differentiator. Time will tell how successful they will be.

For developers? This is definitely a big win, as there are more alternatives on the table, with Vidyo joining in and adding their enterprise video tech into the mix.

 

Join me and Vidyo for a 2017 WebRTC State of the Market webinar – registration and attendance are free.


WebRTC State of the Market 2017

Thu, 01/26/2017 - 12:00

Yup. It is that time of the year for me.

In the past two months there have been lots and lots of summaries, predictions and even reports thrown around about different markets. That’s what you get at the passing of a year.

I had my share of such articles written recently as well.

But it is now time for something that I hope will become a tradition here, which is the yearly WebRTC State of the Market infographic. I did one last year, so why not this time around as well?

Yes. I know. This one is almost a month into 2017, and I’ve written 2017 and not 2016 on it. In a sense, it was about practicality – we care about what comes next more than what happened. To some of us, 2016 is almost a long forgotten memory already.

About the Infographic

What’s different this year in the infographic?

  1. I decided to have a sponsor, and Vidyo were kind enough to oblige and assist. As time passes, it is becoming increasingly difficult to collect and keep all of my WebRTC dataset fresh, so having vendors pitch in and help make it worthwhile is great. So thanks!
  2. It includes a webinar
    • Last year I had a private Virtual Coffee session on this topic to my customers
    • This year I am doing a public webinar with Vidyo about this topic and the findings

The Webinar

Vidyo and BlogGeek.me will be hosting a webinar on 2017 WebRTC Market Outlook. I’ll be joining Nicholas Reid, which will make this all the more fun (doing a webinar solo is… lonely).

We will be covering the various stats in the infographic, going over trends and seeing how we think they will develop in 2017. We will also discuss the just announced Vidyo.io CPaaS and see how it fits into this outlook.

2017 is bound to be interesting and dynamic, so join us.

2017 WebRTC Market Outlook

When? Tuesday, February 7, 2017; 3 pm eastern / noon pacific

Where? Online

The Numbers

  • Over 1,100 vendors and projects using WebRTC, and just looking at the adoption numbers of January 2017, I can say we’re in for an interesting ride this year
  • Largest markets using WebRTC? Customer Management and specific verticals (Healthcare and Education lead the way)
  • Outsourcing vendors are cropping up like mushrooms after the rain, with growth of over 100% in 2016. With the pressure and challenge of finding experienced developers for WebRTC, it is no surprise that many outsourcing vendors are either adding WebRTC to their technology warchest or going all out and focusing on WebRTC projects
  • Live streaming is going strong with 70% growth in 2016. This will continue into 2017 as well, taking a lot of the attention span of entrepreneurs in the social media space
  • In CPaaS there are many ways to split the market. One of them, is by horizontal and vertical players. The dynamics here are truly fascinating, along with the acquisitions in this specific domain (Xura, Twilio and Sinch)

The Infographic

PDF | PNG

What’s next?

That’s the big question, and one I’ll be focusing on a lot this year.

Here’s what you can do next:

  1. Register for the webinar and meet Nicholas and myself there
  2. Subscribe to my newsletter, so you don’t miss out. Lots of interesting announcements coming soon

 

Thanks again to Vidyo for sponsoring this infographic.


WebRTC is FREE. But Developers Aren’t

Mon, 01/16/2017 - 12:00

If you think WebRTC is free, then think again.

WebRTC is free

I see it every day and it is glaringly obvious. I probably haven’t made things any better myself with this slide of mine:

WebRTC is free. But let’s consider what exactly is free with this technology:

  1. The code. You can go download it from webrtc.org. And then… do with it whatever. It is licensed under BSD
  2. The codecs used – and yes – I know – there are people who feel entitled to them through some patents – but no one yet cares about it – almost everyone assumes it is free and uses it – for now
  3. It is readily available inside browsers. Well… at least some of them (Chrome, Firefox & Edge. Coming to Safari sometime)
  4. And that’s a wrap

It is a hell of a lot to give for free. Especially if you went to sleep ten years ago and just woke up.

What’s missing in WebRTC?

But that’s only a small piece of the puzzle. Or as another slide in my deck usually states:

There are loads of things in WebRTC that you need to do in order to get a service to production. Here’s a shortlist of things that just came to mind:

  1. Selecting and implementing signaling
  2. Writing the application
  3. Installing and deploying TURN servers (preferably at scale)
  4. Adding media servers if needed – and making them work for your scenario
  5. Testing it all
  6. Monitoring it in production
  7. Tweaking and upgrading as you go along

How do you fill in these gaps in WebRTC?

All these complementary solutions come in different shapes and sizes:

  • Open source frameworks of various kinds you can use. Most will be half baked (=require more work to get them to production), or not exactly fit your needs
  • Vendors offering consulting and outsourcing (check out a few of them in the WebRTC Index)
  • Different vendors offering hosted and managed services. From signaling, to NAT traversal, testing & monitoring and complete CPaaS

The funny thing is that whenever you talk to one of the companies developing with WebRTC, they believe everything in WebRTC should be free.

  1. STUN servers? Free. There are lists of free STUN servers you can use
  2. TURN servers? Free. Or more like "why can’t I find free TURN servers?" (mind you – you should NEVER use free TURN servers; see the configuration sketch right after this list)
  3. Using a WebRTC PaaS vendor? That’s waaaay too expensive. We want to build it on our own to keep costs down
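To show where those STUN and TURN servers actually plug in, here is a minimal configuration sketch. The URLs and credentials are placeholders – replace them with servers you deploy and operate yourself, which is exactly the part that isn’t free:

```typescript
// Placeholder STUN/TURN entries - point these at servers you run yourself.
const config: RTCConfiguration = {
  iceServers: [
    // STUN only helps discover your public address; it carries no media.
    { urls: "stun:stun.example.com:3478" },
    // TURN relays media when a direct path can't be established,
    // so it costs real bandwidth - one reason "free TURN" is a bad idea.
    {
      urls: "turn:turn.example.com:3478?transport=udp",
      username: "myUser",
      credential: "mySecret",
    },
  ],
};

const pc = new RTCPeerConnection(config);

// Watch the gathered candidates - you should see host, srflx (STUN) and relay (TURN) types.
pc.onicecandidate = (event) => {
  if (event.candidate) {
    console.log("ICE candidate:", event.candidate.type, event.candidate.candidate);
  }
};
```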

The thing is that building these things on your own will take time and money. Lots of both to be exact.

Same thing about how you end up testing it all or monitoring it. I’d say this is how the market looks when it comes to testing WebRTC:

So what?

If you are planning a product that needs communications, you should definitely consider WebRTC first. Before anything else. It is probably going to be the cheapest technology for your needs but also the best one.

That said, you shouldn’t consider it all free. Plan it. Budget it. Write down your requirements. Decide on your architecture. Figure out who your partners should be in this road.

This is why I decided to start off the year with a giveaway. $1,000 worth of credits on VoxImplant. And all you have to do is signup for it. It just might get you started on your road with WebRTC. And who knows what will happen later on?

 


Jumpstart Your WebRTC Development with $1000 on VoxImplant

Thu, 01/12/2017 - 00:00

If you got this far and haven’t entered your email yet, then you should definitely go back up.

I decided to kick off 2017 with a few interesting initiatives. And the first one is this giveaway – a first that I am doing on my website.

I’ve been following the CPaaS space for quite some time now, and have focused on WebRTC PaaS. CPaaS, PaaS and other XaaS acronyms are confusing. For the layman: if you want to develop something that needs communication capabilities but don’t want to host the communication service itself – that’s what you end up using. And if this is your case, then why not try out one of the interesting vendors out there?

VoxImplant was kind enough to offer $1,000 in credits for one person.

To enter, all you need to do is place your email in the large giveaway box at the top – and that’s about it.

Be sure to share this giveaway with others using that URL you’ll be getting, as that will increase your chances of winning – and if enough people join the giveaway – will get you a nice bonus as well (even if you don’t win in this giveaway).

What do you have to lose?

Join now, and maybe it will get you a bit faster to your application goal with the help of VoxImplant.


WebRTC RTCPeerConnection. One to rule them all, or one per stream?

Mon, 01/09/2017 - 12:00

How many WebRTC RTCPeerConnection objects should we be aiming for?

This is something that bothered me in recent weeks during some analysis we’ve done for a specific customer at testRTC.

It all started when a customer using Tokbox came to us. He was complaining he couldn’t get the product he built stable enough and, because of that, couldn’t really get it launched. The reason behind it was partially his inability to decide how many users in parallel could fit into a single session.

So we took that as a side project at testRTC. It is rather easy for us to get 50, 100, 200 or more browsers to point at a single service and session and get the analysis we need, so it was easy to do once we’d written the necessary script. While we have Tokbox customers using our platform, I never tried to go deeper into the analysis for such customers until now. This time, it was part of what the customer expected of us, so it got me looking closer at Tokbox and how they implement multiparty sessions.

In the past couple of weeks we’ve done our digging and reached conclusions of our own. I hadn’t meant to write about them here, but a recent question on Stack Overflow compelled me to do so – Maximum number of RTCPeerConnection:

I know web browsers have a limit on the amount of simultaneous http requests etc. But is there also a limit on the amount of open RTCPeerConnection’s a web page can have?

And somewhat related: RTCPeerConnection allows to send multiple streams over 1 connection. What would be the trade-offs between combining multiple streams in 1 connection or setting up multiple connections (e.g. 1 for each stream)?

The answer I wrote there, slightly modified, is this one:

Not sure about the limit. It was around 256, though I heard it was increased. If you try to open up such peer connections in a tight loop – Chrome will crash. You should also not assume the same limit on all browsers anyway.

Multiple RTCPeerConnection objects are great:

  • They are easy to add and remove, so offer a higher degree of flexibility when joining or leaving a group call
  • They can be connected to different destinations

That said, they have their own challenges and overheads:

  • Each RTCPeerConnection carries its own NAT configuration – so STUN and TURN bindings and traffic takes place in parallel across RTCPeerConnection objects even if they get connected to the same entity (an SFU for example). This overhead is one of local resources like memory and CPU as well as network traffic (not huge overhead, but it is there to deal with)
  • They clutter your webrtc-internals view on Chrome with multiple tabs (a matter of taste), and SSRC might have the same values between them, making them a bit harder to trace and debug (again, a matter of taste)

A single RTCPeerConnection object suffers from having to renegotiate it all whenever someone needs to be added to the list (or removed).

I’d like to take a step further here in the explanation and show a bit of the analysis. To that end, I am going to use the following:

  1. testRTC – the service I’ll use to collect the information, visualize and analyze it
  2. Tokbox’ Opentok demo – Tokbox demo, running a multiparty video call, and using a single RTCPeerConnection per user
  3. Jitsi meet demo/service – Jitsi Videobridge service, running a multiparty video, and using a shared RTCPeerConnection for all users

If you’d rather consume your data from a slidedeck, then I’ve made a short one for you – explaining the RTCPeerConnection count issue. You can download the deck here.

But first things first. What’s the relationship between these multiparty video services and RTCPeerConnection count?

WebRTC RTCPeerConnection and a multiparty video service

While the question on Stack Overflow can relate to many issues (such as P2P CDN technology), the context I want to look at it here is video conferencing that uses the SFU model.

The illustration above shows a video conferencing between 5 participants. I’ve “taken the liberty” of picking it up from my Advanced WebRTC Architecture Course.

What happens here is that each participant in the session is sending a single media stream and receiving 4 media streams for the other participants. These media streams all get routed through the SFU – the box in the middle.

So. Should the SFU box create 4 RTCPeerConnection objects in front of each participant, each such object holding the media of one of the other participants, or should it just cram all media streams into a single RTCPeerConnection in front of each participant?

Let’s start from the end: both options will work just fine. But each has its advantages and shortcomings.
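Before diving into the two services, here is a rough sketch of what each option looks like from the client side, using today’s transceiver API. This is not how Opentok or Jitsi actually implement it – the Signal type below stands in for whatever signaling you use towards your SFU – it just shows the shape of each approach.

```typescript
// "Signal" stands in for your own signaling: send an offer, get the SFU's answer back.
type Signal = (offer: RTCSessionDescriptionInit) => Promise<RTCSessionDescriptionInit>;

// Option A: one RTCPeerConnection per remote participant.
// Adding or removing someone never touches the other connections...
async function addParticipantPerUser(
  config: RTCConfiguration,
  signal: Signal
): Promise<RTCPeerConnection> {
  const pc = new RTCPeerConnection(config);
  pc.addTransceiver("audio", { direction: "recvonly" });
  pc.addTransceiver("video", { direction: "recvonly" });
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  const answer = await signal(offer);
  await pc.setRemoteDescription(answer);
  return pc; // ...but each one keeps its own ICE/STUN/DTLS machinery alive.
}

// Option B: a single shared RTCPeerConnection carrying all remote streams.
// Cheaper to keep alive, but every join or leave forces a renegotiation of the whole thing.
async function addParticipantShared(pc: RTCPeerConnection, signal: Signal): Promise<void> {
  pc.addTransceiver("audio", { direction: "recvonly" });
  pc.addTransceiver("video", { direction: "recvonly" });
  const offer = await pc.createOffer(); // renegotiation: a fresh offer/answer round
  await pc.setLocalDescription(offer);
  const answer = await signal(offer);
  await pc.setRemoteDescription(answer);
}
```

The sections below look at how each of these choices shows up in webrtc-internals.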

Opentok: RTCPeerConnection per user

If you are following the series of articles Fippo wrote with me on testRTC about how to read webrtc-internals, then you should know a thing or two about its analysis.

Here’s how that session looks when I join on my own and get testRTC to add the 4 additional participants into the room:

Here’s a quick screenshot of the webrtc-internals tab when used in a 5-way video call on the Opentok demo:

One thing that should pop up by now (especially with them green squares I’ve added) – TokBox’ Opentok uses a strategy of one RTCPeerConnection per user.

One of these tabs in the green squares is the outgoing media streams from my own browser while the other four are incoming media streams from the testRTC browser probes that are aggregated and routed through the TokBox SFU.

To understand the effect of having open RTCPeerConnections that aren’t used, I ran the same test scenario again, but this time, I had all participants mute their outgoing media streams. This is how the session looked:

To achieve that with the Opentok demo, I had to use a combination of the onscreen mute audio button and having all participants mute their video when they join. So I added the following lines to the testRTC script – practically clicking on the relevant video mute button on the UI:

After this most engaging session, I looked at the webrtc-internals dump that testRTC collected for one of the participants.

Let’s start with what testRTC has to offer immediately by looking at the high level graphs of one of the probes that participated in this session:

  1. There is no incoming data on the channels
  2. There is some outgoing media, though quite low when it comes to bitrate

What we will be doing is ignoring the outgoing media and focusing only on the incoming. Remember – this is Opentok, so we have 5 peer connections here: 1 outgoing, 4 incoming.

A few things to note about Opentok:

  1. Opentok uses BUNDLE and rtcp-mux, so the audio and video share the same connection. This is rather typical of WebRTC services
  2. Opentok “randomly” picks SSRC values to be numbered 1, 2, … – probably to make it easy to debug
  3. Since each stream goes on a different peer connection, there will be one Conn-audio-1-0 in each session – the differences between them will be the indexed SSRC values.

For this test run that I did, I had “Conn-audio-1-0 (connection 363-1)” up to “Conn-audio-1-0 (connection 363-5)”. The first one is the sender and the rest are our 4 receivers. Since we are interested here in what happens in a muted peer connection, we will look into “Conn-audio-1-0 (connection 363-2)”. You can assume the rest are practically the same.

Here’s what the testRTC advanced graphs had to show for it:

I removed some of the information to show these two lines – the yellow one showing responsesReceived and the orange one showing requestsReceived. These are STUN related messages. On a peer connection where there’s no real incoming media of any type. That’s almost 120 incoming STUN related messages in total for a span of 3 minutes. As we have 4 such peer connections that are receive only and silent – we get to roughly 480 incoming STUN related messages for the 3 minutes of this session – 160 incoming messages a minute – 2-3 incoming messages a second. Multiply the number by 2 so we include also the outgoing STUN messages and you get this nice picture.

There’s an overhead for a peer connection. It comes in the form of keeping that peer connection open and running for a rainy day. And that is costing us:

  • Network
    • Some small amount of bitrate for STUN messages
    • Maybe some RTCP messages going back and forth for reporting purposes – I wasn’t able to see them in these streams, but I bet you’d find them with Wireshark (I just personally hate using that tool. Never liked it)
    • This means we pay extra on the network for maintenance instead of using it for our media
  • Processing
    • That’s CPU and memory
    • We need to somewhere maintain that information in memory and then work with it at all times
    • Not much, but it adds up the larger the session is going to be

Now, this overhead is low. 2-3 incoming messages a second is something we shouldn’t fret about when we get around 50 incoming audio packets a second. But it can add up. I got to notice this when a customer at testRTC wanted to have 50 or more peer connections with only a few of them active (the rest muted). It got kinda crowded. Oh – and it crashed Chrome quite a lot.

Jitsi Videobridge: Shared RTCPeerConnection

Now that we know how a 5-way video call looks on Opentok, let’s see how it looks with the Jitsi Videobridge.

For this, I again “hired” the help of testRTC and got a simple test script to bring 4 additional browsers into a Jitsi meeting room that I joined with my own laptop. The layout is somewhat different and resembles the Google Hangouts layout more:

What we are interested here is actually the peer connections. Here’s what we get in webrtc-internals:

A single peer connection for all incoming media channels.

And again, as with the TokBox option – I’ll mute the video. For that purpose, I’ll need to get the participants to mute their media “voluntarily”, which is easy to achieve by a change in the testRTC script:

What I did was just instruct each of my automated testRTC friends that were joining Jitsi to immediately mute their camera and microphone by clicking the relevant on-screen buttons based on their HTML id tags (#toolbar_button_mute and #toolbar_button_camera), causing them to send no media over the network towards the Jitsi Videobridge.
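For reference, the browser-side equivalent of that step is just clicking those two buttons. A plain DOM sketch (not testRTC’s actual script syntax) would look roughly like this:

```typescript
// Generic DOM version of the mute step - not testRTC's script syntax.
// The element ids are the ones Jitsi Meet exposed at the time of writing.
function muteSelfInJitsi(): void {
  document.querySelector<HTMLElement>("#toolbar_button_mute")?.click();   // stop sending audio
  document.querySelector<HTMLElement>("#toolbar_button_camera")?.click(); // stop sending video
}

muteSelfInJitsi();
```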

To some extent, we ended up with the same boring user experience as we did with the Opentok demo: a 5-way video call with everyone muted and no one sending any media on the network.

Let’s see if we can notice some differences by diving into the webrtc-internals data.

A few things we can see here:

  • Jitsi Videobridge has 5 incoming video and audio channels instead of 4. Jitsi reserves and pre-opens an extra channel for future use of screen sharing
  • Bitrates are 0, so all is quiet and muted
  • Remember that all channels here share a single peer connection

To make sure we’ve handled this properly, here’s a view of the video channels’ bitrate values:

There’s the obvious initial spike – that’s the time it took us to mute the channels at the beginning of the session. Other than that, it is all quiet.

Now here’s the thing – when we look at the active connection, it doesn’t look much different than the ones we’ve seen in Opentok:

We end up with 140 incoming messages for the span of 3 minutes – but we don’t multiply it by 4 or 5. This happens once for ALL media channels.

Shared or per-user RTCPeerConnection?

This is a tough question.

A single RTCPeerConnection means less overhead on the network and the browser resources. But it has its drawbacks. When someone needs to join or leave, there’s a need to somehow renegotiate the session – for everyone. And there’s also the extra complexity of writing the code itself and debugging it.

With multiple RTCPeerConnection we’ve got a lot more flexibility, since the sessions are now independent – each encapsulated in its own RTCPeerConnection. On the other hand, there’s this overhead we’re incurring.

Here’s a quick table to summarize the differences:

What’s Next?

Here’s what we did:

  1. We selected two seemingly “identical” services
    • The free Jitsi Videobridge service and the Opentok demo
    • We focused on doing a 5-way video session – the same one in both
    • We searched for differences: Opentok had 5 RTCPeerConnections whereas Jitsi had 1 RTCPeerConnection
  2. We then used testRTC to define the test scripts and run our scenario
    • Have 4 testRTC browser probes join the session
    • Have them mute themselves
    • Have me join as another participant from my own laptop into the session
    • Run the scenario and collect the data
  3. Looked into the statistics to see what happens
    • Saw the overhead of the peer connection

I have only scratched the surface here: there are other issues at play – creating an RTCPeerConnection is a traumatic event. When I grew up, I was told connecting TCP is hellish due to its 3-way handshake. An RTCPeerConnection is a lot more time consuming and energy consuming than a TCP 3-way handshake, and it involves additional players (STUN and TURN servers).
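If you want to get a feel for part of that setup cost yourself, here is a small sketch that times local ICE candidate gathering against a given STUN/TURN configuration. The server URL is a placeholder, and gathering is only one slice of the total – the offer/answer exchange and DTLS handshake add more on top:

```typescript
// Rough sketch: time how long local ICE gathering takes with your STUN/TURN config.
function timeIceGathering(config: RTCConfiguration): Promise<number> {
  const pc = new RTCPeerConnection(config);
  pc.addTransceiver("audio", { direction: "recvonly" }); // give ICE something to gather for
  const start = performance.now();

  return new Promise<number>((resolve) => {
    pc.onicegatheringstatechange = () => {
      if (pc.iceGatheringState === "complete") {
        pc.close();
        resolve(performance.now() - start);
      }
    };
    // setLocalDescription is what actually kicks off candidate gathering.
    pc.createOffer().then((offer) => pc.setLocalDescription(offer));
  });
}

// Example use with a placeholder STUN server:
timeIceGathering({ iceServers: [{ urls: "stun:stun.example.com:3478" }] })
  .then((ms) => console.log(`ICE gathering took ${ms.toFixed(0)} ms`));
```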

Rather consume your information from a slidedeck? Or have it shared/printed with others in your office? You can download the RTCPeerConnection count deck here.

 


Can WebRTC save telephony?

Fri, 12/23/2016 - 12:30

There is no real peak telephony.

[Chad Hart is no stranger to my readers here. He runs webrtcHacks, part of the Kranky Geek team and works at Voxbone. This time, he takes a look at telephony and where it stands today – with and without WebRTC]

Back in April of 2015, I recall Google WebRTC Product Manager Serge Lachapelle talking about the WebRTC team’s focus on mobile and how they wanted to kick “VoLTE’s butt”. To be fair, he was referencing call connection times, but reading between the lines I like to believe his ambitions went well beyond that – namely beating VoLTE and the traditional telephony network in minutes.

"We want to kick VoLTE's butt" @slac talking about #webrtc mobile improvements pic.twitter.com/f1aVnSbgMK

— Chad Hart (@chadwallacehart) April 15, 2015

For many years I have tried to keep track of how traditional telecom has fared against the emerging VoIP application world (what they sometimes derogatorily call “OTT”). I have had two hypotheses for several years now:

  1. Traditional telecoms over the PSTN is past “peak” and will continue to decline
  2. Real time communications (RTC) in general has been on the decline but is poised to make a comeback thanks to better implementations and technologies like WebRTC

Let’s check the data to test these statements.

Peak Telephony? Maybe Not…

Digging into the statistics from various sources, I was surprised to find I was wrong about my first hypothesis on peak telephony.

The US market

Let’s start by taking a look at the situation in the US, one of the world’s largest communications markets. The Cellular Telecommunications Industry Association (CTIA) provides an annual update that sometimes includes Minutes of Use (MoU) and subscriber data. The data shows that mobile telephony usage already peaked on a per-subscriber basis in 2007.

However, there is a growing number of data-only subscriptions for our tablets and other devices counted as subscribers. This negatively skews the numbers. Looking at “minutes” on a per capita basis is a cleaner metric, so let’s divide the minute figures by the US population. This shows a much more interesting picture where mobile phone usage for traditional calling actually went up by 16%.


Total US cellular telephony minutes appear to be rising after stalling for many years (my calculations)

Checking the data against other FCC sources, this growth may be overstated, but there is no clear evidence of decline. So what’s going on? Much of cellular’s continued volume can be attributed to fixed-mobile substitution – both in terms of people dropping their fixed lines and, as the FCC reports, “A significant percentage of homes with both landline and wireless phone access received all or almost all calls on wireless telephones despite also having a landline telephone.” If we assumed total PSTN calling was flat, then according to my estimates, a 30% annual decline in fixed line minutes would be required to offset the growth in cellular minutes. This is possible, but way faster than past usage declines in fixed lines, so it is more likely that cellular usage did indeed have a very good year in 2015.

There is no clear evidence of peak PSTN telephony in the US, so let’s check some other sources.

The UK market

The UK’s Ofcom is generally a much better datasource than the FCC since they look at communications as a whole within the UK and compare it to other countries.

They are a lot more pessimistic when it comes to PSTN-based telephony. Their data is very definitive, showing a continued, gradual decline in PSTN call volumes going back to 2010. With a -3% 5-year CAGR, no matter how you cut it, “operator” traffic is down.

Ofcom CMR 2016 report shows declines in operator voice usage

They have not released their global figures for 2015, but their 2014 report showed similar trends, with mature markets declining (US, Western Europe, Japan & Korea). However, emerging markets like China, India, and Russia showed growth and just made up for the declines in the mature markets in 2014.

Does anyone care about the PSTN anyway?

Outside of adding touch tone dialing and going cordless, the Public Switched Telephone Network (PSTN) telephony user experience hasn’t exactly changed a whole lot in a hundred years. The PSTN is only one way to make calls – now we have dedicated VoIP apps, messenger apps with voice, and a growing number of video communications options. Do these new forms of RTC give us any hope of reversing traditional telephony’s demise? The data here is more positive.

Ofcom’s data shows an increasing usage of VoIP for voice calls and a very definitive increase in video call usage. This is consistent with their international research from a year earlier:

VoIP Apps Save the Day

So where do newer RTC apps and features fit into all this? Using Ofcom’s methodology, the 18 countries they track produce somewhere around 10 Trillion minutes a year. Microsoft has previously claimed Skype does up to 3 Billion minutes a day – that’s a Trillion minutes a year if one assumes around a 3 Billion daily average. Even if the true annual value is half of that, clearly Skype alone is meaningful compared to PSTN volumes.

Apple does not release any figures for its FaceTime service, introduced in 2010, but presumably its usage is substantial, although less than Skype’s based on Ofcom’s past user surveys. WeChat, Line, and Viber all have more than 200 million monthly active users with various VoIP features. WhatsApp now has more than 1 billion MAU, and its VoIP calling feature, launched in April 2015, handles more than 100 million voice calls a day. Taken together, these other VoIP services easily add up to more than a trillion minutes a year.

At 10 to 20% of the PSTN’s volume, VoIP traffic clearly has a way to go before it dominates the PSTN, but there is no doubt its volumes are meaningful in comparison. Furthermore, these services are still growing. Certainly some of that growth will come at the expense of the PSTN, but it appears they are also encouraging more RTC use in general.

Does WebRTC matter?

WebRTC does not factor heavily into the services cited above, but that is poised to change. At only 5 years old, WebRTC has not had that much time to widely establish itself in relation to other VoIP technologies. Still, there are a few notable standouts – particularly Facebook Messenger. Facebook has stated it has more than 300 million monthly active users of Messenger’s VoIP features and just this week announced it had 245 million monthly video users. Other notable users include Snapchat and of course Google’s Hangouts and Duo services.

There are a lot of other WebRTC apps showing big user gains too, such as Houseparty, which reported 20 million minutes of usage a day last month – not bad for an app that only emerged from the ruins of Meerkat a few months ago. In addition, more traditional VoIP apps like WhatsApp and Skype are starting to use WebRTC, albeit in limited circumstances today, but that will certainly grow too.

In aggregate, I estimate WebRTC-based services easily have over 500 million MAU this year across 2 billion devices. Comparing this to other VoIP technologies at the 5 year mark, WebRTC is way ahead. This bodes well for WebRTC to be an incremental driver of VoIP traffic and further accelerator of RTC.

Conclusions

I have been concerned that the desire of people to communicate in real time reached its pinnacle long ago. Why focus on RTC if the trend is clearly toward “messaging” and other forms of textual interaction? Has telephony peaked? The evidence suggests that is probably the case for the PSTN in developed markets, but there are plenty of pockets of growth. Where declines exist, they are gradual. Even better, there is a large body of evidence that VoIP services are more than making up for any declines and then some. This indicates that we are actually using real time communications more than ever. The recent and rapid rise of many WebRTC services further shows that this trend is very likely to continue, or perhaps even accelerate. That’s great news for the hundreds of WebRTC vendors out there and those that have yet to come.

The post Can WebRTC save telephony? appeared first on BlogGeek.me.

The Best WebRTC Security is Prone to the Stupidest Developer

Mon, 12/19/2016 - 12:00

WebRTC is the most secure technology for video communications. And yet – developers can screw this up for you.

There has been a rise in security breaches and data theft incidents in 2016 – you can see it in the sheer amount of reports out there. I’ve written about WebRTC and security for quite some time, but a recent post I’ve read compelled me to write about it again.

The post? Red Cross Blood Service in Australia leaks personal data.

The gist:

  • Site is secure
  • A contractor places a database dump on the internet for backup
  • And that gets found

It probably happens more often than not. You build a service. You take care of its security. And then, someone down the line screws you over with his maintenance processes. To some extent, this is just as bad as social engineering, where a hacker tries to gain access by fooling people into believing he is someone else.

Make sure to download the WebRTC Security checklist. Print it and stick it on the wall behind your monitor so you don’t forget.

WebRTC Security baseline

WebRTC comes with a few security concepts that are quite new and innovative in VoIP:

  1. In WebRTC, EVERYTHING is encrypted. Not only by default, but also in a way that can’t be modified – there is no way to send data over WebRTC in the clear
  2. WebRTC forces you to operate over HTTPS and WSS in your web application, so signaling gets encrypted as well
  3. Screensharing requires an additional layer of consent, be it whitelisting of your site or a creation of a browser extension
  4. Browsers today update frequently and automatically, so any security threat found gets patched faster than most enterprise and VoIP vendors react to their security breaches

The thing people forget is that WebRTC is just a piece of technology. A building block. It is up to the developers to decide how to use it in their own product. During that integration, security breaches can be created quite easily.

In the WebRTC course I launched two months ago, I’ve added a lesson dealing with WebRTC security. It goes through the mechanisms that exist in WebRTC and the areas that need to be further secured by the application.

Two big issues left to developers today are TURN passwords and access to backend server resources.

#1 – TURN passwords

TURN servers predate WebRTC. They are used by SIP (or at least are found in the spec), and there, the notion is that the user agent (=device/endpoint) is secure and “named”. So a username and password mechanism was created to get a TURN binding. The reason you want such a mechanism in the first place is because TURN servers are bandwidth hogs – they relay media, and by doing that they cost a lot in terms of bandwidth. So if you are paying for it, you don’t want others to piggyback on it.

The problem with this approach in WebRTC is that the username and password need to be passed from your JavaScript code inside the browser to the server. Which means that information is available in the clear for many use cases – those where you don’t need or want the user to authenticate with the network at all. You also don’t want someone sniffing your code in the browser and then reusing these credentials elsewhere.

The current approach out there is to use temporary passwords (I like calling them ephemeral – it makes me sound intelligent). Ones that become useless in an hour or two.

This means that someone in your backend randomly creates a password that is short-lived and shares it with both the TURN server and the client.

The above illustrates how this is done.

  • The App Server, in charge of signaling in this case, creates a password. It updates the TURN server about said password and also gives that information to the User
  • The User then creates a peer connection, configuring the TURN server in it with the relevant temporary password
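
For illustration only, here is a minimal sketch of that backend step, assuming a coturn-style shared-secret scheme (the “TURN REST API” approach, where the password is an HMAC over a username that embeds an expiry time). The secret and hostnames are placeholders:

```typescript
import { createHmac } from "crypto";

// Placeholder – must match the shared secret configured on the TURN server
// (e.g. coturn's use-auth-secret / static-auth-secret mechanism).
const TURN_SHARED_SECRET = "replace-with-your-secret";

function createEphemeralTurnCredentials(userId: string, ttlSeconds = 3600) {
  // The username embeds an expiry timestamp; the TURN server rejects it afterwards.
  const expiry = Math.floor(Date.now() / 1000) + ttlSeconds;
  const username = `${expiry}:${userId}`;
  // The password is an HMAC-SHA1 over that username, base64 encoded.
  const credential = createHmac("sha1", TURN_SHARED_SECRET)
    .update(username)
    .digest("base64");
  return { username, credential };
}

// The browser then plugs whatever it received over signaling into its
// RTCPeerConnection configuration, e.g.:
//
//   new RTCPeerConnection({
//     iceServers: [{ urls: "turn:turn.example.com:443", username, credential }],
//   });
```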

Great.

Now let’s add a media server into the mix.

Who should be generating that password and passing it around to whom? Should the Media Server now be in charge of it, or is it still up to the App Server to take care of this?

Which leads me to the second important security aspect of WebRTC when it comes to your development – backend server resources you need to protect.

#2 – Backend server resources

In many cases, I find that when the work is outsourced, the end result tends to be a jumble of an architecture if things aren’t thought out properly from the beginning.

This usually causes the wrong servers to connect and communicate directly with the user. While not an issue on its own, it can easily turn into a headache:

  • Not having a clear picture of the state in your backend means you lose control – this can turn ugly when issues arise
  • Opening up more of your backend towards the internet means more points to secure against penetration
    • And yes – I know there’s a trend to treat servers in the cloud as if they are always open to the internet
    • Which means you need to think about how best to protect them in the first place anyway, which happens to be closing them as much as possible

What I suggest in many cases is:

  1. Media servers should never be controlled or accessed directly from the Internet
  2. Media servers should only pass media to and from the Internet
  3. Whenever they need to be controlled, you do that using backend-to-backend communication from other servers you have that are already managing the users on the Internet

What’s next?

I am not a security expert. I know a bit about it and try to stay informed, but I am by no means an expert in it.

You should make sure to take security into consideration when developing your service and don’t assume WebRTC does everything for you. It doesn’t, but it is the best starting point you’ll get.

If you want to learn more about WebRTC, I will be opening the course again for another round. Probably during April.

If you are a corporation looking to have open access to course materials throughout the year for your workforce – I am going to announce such a plan soon, but feel free to reach out to me before that happens.

Just do me a favor – don’t leave WebRTC security to chance.

Need a reminder? Download the WebRTC Security checklist. Print it and stick it on the wall behind your monitor so you don’t forget.

The post The Best WebRTC Security is Prone to the Stupidest Developer appeared first on BlogGeek.me.

How to Write the WebRTC Requirements for Your New Product?

Mon, 12/12/2016 - 12:00

Got a requirements document to write for WebRTC? Here’s a step by step guide to doing just that.

Here is something that I do with my customers quite often. In many cases, when I consult vendors, they are in the process of building a new product or integrating an existing product with some new communication capabilities. This involves using WebRTC and outsourcing the actual development.

More often than not, I find myself writing the baseline of the requirements document for the customer, to serve as a WebRTC RFP (Request For Proposals) that gets used to communicate the requirements to the potential outsourcing vendors.

I wanted to share the process that I use in writing the first draft of this document. To make this a bit more useful, let’s assume that what we want to do is build a webinars service, where a few people can join as the speakers in the webinar and people online can “listen in”.

I’ve created a WebRTC requirements template and a sample webinars requirements document that you can use when you need to write the requirements for your own product.

Get the WebRTC Requirements Template and Sample Webinars Requirements Document

Here’s step by step how I’d go about doing that.

Step #1: Structure your document

First things first. To make sure I don’t forget anything, I like to split my requirements document into 4 sections:

  1. Overview
  2. Architecture
  3. Functional
  4. Non-functional

As you can see above, I place TBD for each section in the document. I do that for all sub-sections that I add to the document as well. This way, I can easily search for the areas that haven’t been filled in properly yet when I work on it. More often than not, writing these WebRTC requirements takes a couple of hours and spans a few days, because the process is collaborative in nature.

I tend to leave out the mechanics around the project – such as the pricing model I am looking for, or the timeline of the project. These tend to change between companies and they are often better reflected elsewhere than in the technical requirements that I try to describe here.

Step #2: Write the overview

First thing I do once I have the template ready for my needs is write the overview part.

I try to keep the overview short and sweet, with a focus on making sure people understand what it is that I am trying to achieve in the service – what my challenges are and what I consider as success.

Usually, 2-3 paragraphs should be enough.

Step #3: Describe the architecture

Now it is time to start thinking about our architecture. By that, I don’t mean the architecture of the solution, what processes, servers and switches I want – I leave that for the vendor to fill in. What I mean is the entities I have in my service, trying to focus around the session – the types of media and signaling I want running there.

I do this by going analog – just jotting it down on my whiteboard and taking a picture of the end result. I find this more natural for me than using PowerPoint or Visio. Later on, I might redo it as a PowerPoint diagram, but more often than not, I just leave it as is.

Above is the drawing I just did to describe the BlogGeek.me Webinar I just invented.

After the visual, I explain the different entities that are in the drawing and the relations between them. This part is really important, as oftentimes, it will reveal entities or flows that I haven’t thought about earlier.

In the case of the BlogGeek.me Webinar, we’ve got multiple potential Speakers who interact using audio and video with each other in the Webinar, which then gets sent to multiple Viewers and also to an external Storage.

I try to keep things focused and to the bare minimum that is necessary for the understanding of the service.

Step #4: Fill in the features

To some extent, this step is the main chunk of what the product does. For me, this is a brain dump of the things a user should be able to do in the system.

There are different types of features you might be needing. I focus on those that relate to the communication part of the product and nothing else.

Here’s a checklist of what I usually go through when doing this:

  • Is this a 1:1 service or multiparty?
  • Audio? Video?
  • Any screen sharing or other collaboration capabilities?
  • Messaging/chat?
  • How do users get authorized, authenticated and connected?
  • Any in-session controls users need to have?
  • Any indicators to show on their display?
  • Do we need recording capabilities?
  • Anything that we missed?

Make sure you answer all the questions above as requirements in the document if they make sense and add your own to the list.

Here are a few of the ones I’ve written for the webinars product:

Notice how I’ve indicated that connectivity via PSTN is optional in a future phase? This serves two purposes for me:

  1. It gives the vendor a hint of what architecture to put in place to support this later on down the road
  2. It also gives the vendor a feeling that this is a journey and not a one-off project. He will be more committed to its success if he knows you might call on him later on to improve and extend the service

Step #5: Handle the non-functional requirements

Now it is time to go over the non-functional requirements. These are the boring and ugly details that can make or break a service, so spend enough time on this one.

What do I mean by non-functional? These will usually be things you take for granted, but the vendor won’t. To reduce friction and arguments in the future, I add these. In all likelihood, if you don’t write these down, a vendor will ask about a few of these things anyway – so just write them down to reduce the unnecessary round trips and to make sure you and the vendor are on the same page.

I tend to split this section into 5 subsections, each with its own focus:

1. Devices

Here I list all the devices I want to support. Browsers, operating systems, mobile devices, etc.

Each gets its own special treatment. Things I usually look at here are:

  • Which browsers to support natively?
  • Do I need an Electron PC app? If I do, then on which operating systems?
  • What versions of iOS to support? Which earliest devices?
  • What versions of Android API to support? How many specific devices do I want tested?

In many ways, I derive the requirements here based on the WebRTC Device Cheat Sheet that I published.

2. NAT traversal

NAT traversal is often overlooked. There are two areas where I cover NAT traversal – here and in the Security subsection below.

Here, I define who takes care of it – do I expect the vendor to bring a NAT traversal solution, will I be doing it, or should they use a third party hosted service (there are a few out there offering it).

The second part, which I sometimes decide here but not always, is where I want it deployed – along with the media servers or closer to the connecting user. It is a matter of architecture needs that I usually prefer leaving to the vendor to fill in, but not always (I can’t really say when in a definitive way).

In my webinar example, I decided to make things easy and just use a third-party hosting service:

3. Scalability

For scalability I make sure I cover a few areas:

  1. What’s the scale of a single session? Just make sure that if there are different types of users, you indicate how each can scale
  2. What’s the scale of the service as a whole? How many sessions can exist concurrently?
  3. Do I need to address any geographical locations when it comes to scaling?
  4. How do the different parts of the system scale? Independently of each other?

Here’s how I fit it into our webinar example:

4. Security

The security part is slightly tricky. First, because I am not an expert. But also because almost nobody is.

What I usually place here is the basics of how I’d like to see the backend (encryption between the servers), but I do cover two important areas:

  1. Media servers. When it comes to control, I prefer access to them to be limited to the application server only, and to have all signaling routed through the application server or a signaling server. I don’t like giving open access to my resource hogs over the internet. Call me old fashioned
  2. TURN servers. Here I always state that I want ephemeral passwords. Otherwise, vendors usually take the shortcut of using a static username and password, which in WebRTC is like having no password at all on the TURN server

5. DevOps

The DevOps section deals with things required to run this product on a daily basis. I tend to fill in three main things here:

  1. Hosting – is this planned to be deployed on premise? In a specific cloud provider? Using Docker or some other container technology?
  2. Reporting – what type of information do you want to collect to generate reports on the use of the system? These should be for offline use – think a daily email or something similar
  3. Events and statistics collection – this is what you want to be able to collect to monitor the health of the service in realtime

Step #6: Do a one-over and share

Now that we’ve written it all, it is time to go over the whole document to make sure nothing is missing:

  1. Clean up any leftover TBDs
  2. Add clarifications where necessary
  3. Add things not in the template

Here’s what I decided to add to the webinar example:

As you can see, for me, open source was really important.

Now that you are done – go share the document with your colleagues, and once approved internally, it is time to share with potential outsourcing vendors.

Why so short?

To some, this approach may seem a bit shallow. It doesn’t include all corner cases or describe in a lot of detail what goes on. The thing is, there is a balance between what you can effectively do and achieve as a small startup – or even a big company with a new project – and what you’d do on a long-running, multi-year, multi-million dollar project.

For me, this has proven itself as a good way to capture the essence of what needs to be developed and to get replies from potential vendors about building the product. Once I get the replies, it is time to go over them and see who makes the most sense – a lot of it based on how they replied to the RFP in the first place.

What’s next?

So here’s how you should write your next WebRTC requirements document:

Step #1: Structure the document to make sure all bases are covered

Step #2: Focus on the overview – explain what your product needs to achieve

Step #3: Draw the architecture and explain it

Step #4: Write down your functional requirements

Step #5: Write down all non-functional requirements

Step #6: Do a one-over to make sure you didn’t miss anything

I’ve built a WebRTC Requirements Template document for you. You can copy it and fill it in with the requirements of your own product. It already holds many of the questions you’ll need to answer, so it can serve as a guide for you.

Now, to write this article, I also had to create a real-world example (remember our webinar service?). This example is also shared so you can see how I write things down.

Get the WebRTC Requirements Template and Sample Webinars Requirements Document

Oh, and if you still need help – I do offer a consulting service, where a lot of the time invested is placed into writing these requirements documents, finding suitable potential vendors and going over their responses.

The post How to Write the WebRTC Requirements for Your New Product? appeared first on BlogGeek.me.

WebRTC and Education – the Webinar Edition (and a Bonus)

Mon, 12/05/2016 - 12:00

Want to learn more about WebRTC in education?

Next week, testRTC will be hosting a webinar titled How WebRTC ushers the next wave of e-Learning innovation. As a co-founder of testRTC, I am tasked with the actual creation and hosting of the webinar, which means I will be speaking about what vendors are doing with WebRTC when it comes to education and where I see their challenges.

I haven’t done a webinar in quite some time, so this is going to be fun for me.

We’ve decided to use Crowdcast as our webinars platform for it. Partially because it is a WebRTC based service, and I do love dog fooding. But also because I received some good reviews about it.

If I had to pick two very active verticals in the domain of WebRTC, these would be healthcare and education. We see this also at testRTC, where we help these vendors in testing and deploying their services to production.

Next Wednesday

So here’s what we’re doing next Wednesday – me and you:

  • On December 14 at 14:30 EDT, we’re going to meet online
  • I am going to give a few interesting examples of what education looks like when it meets WebRTC
  • Then talk about some of the challenges involved
  • You will have time to ask questions. I’ll answer them to the best of my ability
  • And then I am going to give you a bonus

The examples

The examples part of the webinar is probably going to be the most interesting one.

I remember talking almost 3 years ago with a startup in India about their use case. It was related to education and it blew my mind. It was so starkly different than what I assumed a startup in India would do within their local market for education that I saw it as my own private lesson. Since then, I talked with tens of vendors in this space. Each doing his own thing. Each focusing on solving a problem in tutoring. They are so wide in variety that you can’t even look at them as a single market.

But this is exactly what we will try to do here. I am going to categorize them a bit – I wonder where you will find yourself in that categorization.

The challenges

Learning has its challenges for the student, the teacher and now also for the platform.

My intent is to look at the challenges of the platform – what are the things necessary to put these different education systems in production and how to make sure they work properly.

For the various types of education platforms, I’ll give you tips for where you should focus with your testing – what are the weak spots to look for – so you can find and deal with them before your customers do.

The bonus

I am not going to say what the bonus is now – it will ruin the surprise. I will say though, that this is something you’ll find immediately useful.

The bonus will be available only to those who will be with me during the webinar itself, so register now and save your place.

What’s next?

Register of course!

And feel free to write down your questions in advance – Crowdcast allows for that.

The post WebRTC and Education – the Webinar Edition (and a Bonus) appeared first on BlogGeek.me.

WebRTC, TURN and Geolocation. How to Pick the Best Server to Work With?

Tue, 11/29/2016 - 12:00

Different ways to do the same thing.

One of the biggest problems is choice. We don’t like having choices. Really. The fewer options you have in front of you, the easier it is to choose. The more options we have – the less inclined we are to make a decision. It might be this thing called FOMO – Fear Of Missing Out, or the fact that we don’t want to make a decision without having all the information – something that is impossible to achieve anyway, or it might be just the fear of committing to something – commitment means owning the decision and its ramifications.

WebRTC comes with a huge set of options to select from if you are a developer. Heck – even as a user of this technology I can no longer say what service I am using:

  • I use Drum for my Virtual Coffee sessions (haven’t done one in some time. Should do one next month)
  • I now use Jitsi meet for my Office Hours
  • Google Hangouts for testRTC meetings with customers
  • Whatever a customer wants for my own consultation meetings, which varies between Hangouts, Skype, appear.in, talky, GoToMeeting, WebEx, … or the customer’s own service

In my online course, there’s a lesson discussing NAT traversal. One of the things I share there is the need to place the TURN server as close as possible to the edge – to the user with his WebRTC client. Last week, in one of my Office Hour sessions, a question was raised – how do you make that decision. And the answer isn’t clear cut. There are… a few options.

My guess is that in most cases, the idea or thought of taking a problem and scaling it out seems daunting. Taking that same scale-out problem and spreading it across the globe to offer lower latency and geolocation support might seem paralyzing. At the end of the day, though, it isn’t that complex to get a decent solution going.

The idea is you’ve got a user that runs on a browser or a mobile device. He is trying to reach out to your infrastructure (to another person probably, but still – through your infrastructure). And since your infrastructure is spread all over the globe, you want him to get the closest box to him.

How do we know what’s closest? Here are two ways I’ve seen this go down with WebRTC based services:

Via DNS

When your browser tries to reach out to the server – be it the STUN or TURN server, the signaling server, or whatever – it ends up using DNS in most cases (you’d better use a DNS name rather than an IP address for these things in production – you are aware of that, right?).

Since the DNS knows where the request originated, it can make an informed decision as to which IP address to give back to the browser. That informed decision is made on the infrastructure side, by the DNS itself.

One of the popular services for it is AWS Route 53. From their website:

Amazon Route 53 Traffic Flow makes it easy for you to manage traffic globally through a variety of routing types, including Latency Based Routing, Geo DNS, and Weighted Round Robin.

This means you can put a policy in place so that the Route 53 DNS will simply route the incoming request to a server based on its location (Latency Based Routing, Geo DNS) or based on load balancing (Weighted Round Robin).
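
From the client’s point of view nothing special is needed – it keeps using a single hostname and lets the DNS pick the nearest region behind the scenes. A minimal sketch (turn.example.com and the credentials are placeholders):

```typescript
// The browser only ever sees one hostname; the latency-based DNS record
// decides which regional server that hostname resolves to for this user.
const pc = new RTCPeerConnection({
  iceServers: [
    { urls: "stun:turn.example.com:3478" },
    {
      urls: "turn:turn.example.com:443?transport=tcp",
      username: "ephemeral-username",   // generated by your backend
      credential: "ephemeral-password",
    },
  ],
});
```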

appear.in, for example, is making use of Route 53.

Amazon Route 53 isn’t the only such service – there are others out there, and depending on the cloud provider you use and your needs, you may end up using something else.

Via Geo IP

Another option is to use a Geo IP type of a service. You give your public IP address – and get your location in return.

You can use this link, for example, to check out where you are. Here’s what I get:

A few things that immediately show up here:

  • Yes. I live in Israel
  • Yes. My ISP is Bezeq
  • Not really… Tel-Aviv isn’t a state. It is just a city
  • And I don’t live in Bat Yam. I live in Kiryat Ono – a 20km drive

That said, this is pretty close!

Now, this is a link, but you can also get this kind of information programmatically, and there are vendors who offer just that. I’ve had the pleasure of using MaxMind’s GeoIP. It comes in two flavors:

  1. As a service – you shoot them an API request and get geo IP related information in return, priced per query
  2. As a database – you download their database and query it locally

There’s a kind of confidence level to such a service, as the reply you get might not be accurate at all. We had a customer complaining that testRTC servers jinxed his geolocation feature and added latency – his geo IP service thought the machine was in Europe while in truth it was located in the US.

The interesting thing is that different such services will give you different responses. Here’s where a few other such services place me (see here):

As you can see, there’s a real debate as to my exact whereabouts. They all feel I live in Israel, but the city thing is rather spread – and none of them is exact in my case.

So.

There are many Geo IP services. They will differ in the results they give. And they are best used if you need an application level geolocation solution and a DNS one can’t be used directly.
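
If you go down this route, the backend logic usually boils down to: take the coordinates your Geo IP lookup returned and pick the closest region you are deployed in. A rough sketch (the region list and coordinates are purely illustrative):

```typescript
interface Region {
  name: string;
  lat: number;
  lon: number;
}

// Illustrative deployment map – replace with your actual data centers.
const REGIONS: Region[] = [
  { name: "us-east-1", lat: 39.0, lon: -77.5 },
  { name: "eu-west-1", lat: 53.3, lon: -6.3 },
  { name: "ap-southeast-1", lat: 1.3, lon: 103.8 },
];

// Great-circle distance between two coordinates, in kilometers.
function haversineKm(lat1: number, lon1: number, lat2: number, lon2: number): number {
  const toRad = (deg: number) => (deg * Math.PI) / 180;
  const dLat = toRad(lat2 - lat1);
  const dLon = toRad(lon2 - lon1);
  const a =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLon / 2) ** 2;
  return 6371 * 2 * Math.asin(Math.sqrt(a));
}

// Pick the deployed region closest to the coordinates the Geo IP service returned.
function closestRegion(userLat: number, userLon: number): Region {
  return REGIONS.reduce((best, candidate) =>
    haversineKm(userLat, userLon, candidate.lat, candidate.lon) <
    haversineKm(userLat, userLon, best.lat, best.lon)
      ? candidate
      : best
  );
}
```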

Telemetry

When inside an app, or even from a browser when you ask permission, you can get better location information.

A mobile device has a GPS, so it will know the position of the device better than anything else most of the time. The browser can do something similar.

The problem with this type of location is that you need permission to use it, and asking for more permissions from the user means adding friction – decide if this is what you want to do or not.
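
A minimal browser-side sketch of that flow (the backend endpoint here is hypothetical):

```typescript
// Ask for the device position explicitly – this is what triggers the
// permission prompt, i.e. the friction mentioned above.
navigator.geolocation.getCurrentPosition(
  (position) => {
    const { latitude, longitude } = position.coords;
    // Hypothetical endpoint: let the backend pick the nearest TURN/media server.
    fetch("/api/nearest-server", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ latitude, longitude }),
    });
  },
  (error) => {
    // Permission denied or unavailable – fall back to DNS/Geo IP based selection.
    console.warn("Geolocation unavailable:", error.message);
  }
);
```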

What’s next?

I am sure the DNS option is similar in its accuracy level to the geo IP ones, though it might be a bit more up to date, or have some learning algorithm to handle latency based routing. At the end of the day, you should use one of these options – and it doesn’t really matter which.

Assume that the solution you end up with isn’t bulletproof – it will work most of the times, but sometimes it may fail – in which case, latency will suffer a bit.

 

Planning on introducing WebRTC to your existing service? Schedule your free strategy session with me now.

 

The post WebRTC, TURN and Geolocation. How to Pick the Best Server to Work With? appeared first on BlogGeek.me.

Kranky Geek 2016 SF: Mobile WebRTC

Mon, 11/21/2016 - 12:00

Kranky Geek last week was quite a rush.

Wow.

What can I say. Last week, our Kranky Geek event was so much fun.

I won’t bore you with the details. We’ve focused this time on WebRTC in mobile. Got the best speakers possible – really. And had a blast of an event. I received so much positive feedback that it warms my heart.

I’d like to thank our sponsors for this event: Google, Vidyo, Twilio and TokBox. Without them, this event wouldn’t have been possible.

The videos are available online, and below you’ll find the playlist of the event:

Tomorrow we’re doing another Kranky Geek event. This time in Sao Paulo, Brazil. Different theme. Different sessions. I am dead tired, but working hard with Chad and Chris to make that a huge success as well. See you soon!

 

The post Kranky Geek 2016 SF: Mobile WebRTC appeared first on BlogGeek.me.

My WebRTC Device Cheat Sheet

Mon, 11/14/2016 - 12:00

All you wanted to know but didn’t know how to ask.

2 billion Chrome browsers? 7 billion WebRTC enabled devices by 2017? 50 billion IoT devices?

At the end of the day, who cares? What you are really interested in is making sure that the WebRTC product you develop will end up working for YOUR target customers. If these customers end up running Windows XP with Internet Explorer 6, then you couldn’t care less about Apple, Safari and iOS support. But if what you are targeting is a mobile app, then which browser supports WebRTC is less of an issue for you.

To make things a bit simpler for you, I decided to create a quick Cheat Sheet. A one pager to focus you better on where you need to invest with your WebRTC efforts.

This cheat sheet includes all the various devices and browsers, and more importantly, how to get WebRTC to work on them.

So why wait? Grab your copy of the cheat sheet by filling out this form:


 

 

The post My WebRTC Device Cheat Sheet appeared first on BlogGeek.me.

Desktop browsers support in WebRTC – a reality check

Mon, 11/07/2016 - 12:00

Time for a quick reality check when it comes to browsers and WebRTC.

I know you’ve been dying for Apple to support WebRTC in Safari. I am also aware that without WebRTC in the Microsoft Internet Explorer 6 that you have deployed in your contact center, there is no way for WebRTC to become ubiquitous or widely adopted. But hear me out please.

Browsers market share

The recent update by NetMarketShare on the desktop browsers market share is rather interesting:

It shows the trend between the various desktop browsers for the last year or so.

Here are some things that come to mind immediately:

  • Google Chrome now has 55% market share. Its rise has stalled somewhat in the last couple of months
  • Microsoft Internet Explorer is still free falling. It will probably stop somewhere at 10% or so if you ask me
  • While Chrome gained the most users from Internet Explorer, it seems that Firefox has picked up users from Internet Explorer in the past two months
  • Microsoft Edge gained very little from the demise of Microsoft Internet Explorer. People who have adopted Windows 10 aren’t adopting Edge and are most probably opting to install and use Chrome or Firefox instead. I’ve mentioned it here in the past

What happens between Microsoft Edge and Apple Safari is even more interesting. Apple Safari is falling behind Microsoft Edge:

Something doesn’t add up here.

The Edge numbers should rise a lot higher, due to the successful upgrades we’ve seen for Windows 10 in the market. And they don’t. We already noticed how Chrome and, to some extent, Firefox enjoyed that switch to Windows 10.

I am not sure how the slip of Apple Safari market share from almost 5% in the beginning of this year to below 4% can be explained. Is it due to the slip in Mac sales in recent months or is it people who prefer using Chrome or Firefox on their Macs?

There’s one caveat here of course – these numbers are all statistics, and statistics do tend to lie. When going into specific countries, there will be a different spread across browsers, and to a similar extent, your service sees a different type of browser spread because your users are different. Here are the stats from Google Analytics for this blog:

For me, it is tilted towards browsers supporting WebRTC, and Safari is way higher than Edge and Internet Explorer put together.

Back to WebRTC

Every once in a while, someone would stand up and ask: “But what about Internet Explorer?” when I talk about WebRTC. It is becoming one of these questions I now expect.

Here’s what you need to think about and address:

  • Chrome is probably your go-to browser and the first one to support with your WebRTC product
  • Firefox comes next, and growing. So keep tabs on it to see how it “performs” with your product
  • Edge. Useless for most. Add support to it if:
    • You do voice only (should work nicely), and you want that extra market share
    • You know for sure your users are on Edge
  • Internet Explorer. Ignore
    • Microsoft probably won’t invest in having WebRTC support in it, so don’t wait for them
    • Use a plugin or whatever if you must
  • Safari. Ignore for now. Nothing to do about it anyway

What’s next?

I am working on a quick cheat sheet for you. One which will enable you to make fast decisions for browser support. It will extend also into apps and mobile. Probably by next week.

Until then, if you plan on picking up browsers to support, think of your target audience first. Don’t come up with statements like “IE must be supported” or “Without Safari I can’t use this technology”. You are just hurting yourself this way.

 

Planning on introducing WebRTC to your existing service? Schedule your free strategy session with me now.

 

The post Desktop browsers support in WebRTC – a reality check appeared first on BlogGeek.me.

Get Ready for Kranky Geek San Francisco AND São Paulo

Mon, 10/31/2016 - 12:00

Kranky Geek is coming to town!

WebRTC is maturing. We’re 5 years into this roller coaster and it seems most companies have already understood that they need to use WebRTC in one way or another. To many, this is going to be an excruciatingly painful journey. They will need to change their business model, think differently about how they develop products and even rewrite their core values.

One of the reasons we decided to launch Kranky Geek over two years ago was to have a place where developers can teach developers about WebRTC. Somewhere that isn’t already “tainted” with the telecom views of the world – not because they are bad – just because WebRTC can accomplish so much more. What we are going to do next with WebRTC takes place in November and will happen in two separate locations:

Kranky Geek San Francisco

San Francisco is where Kranky Geek started and where I feel at home when it comes to this event. We will be doing our 3rd Kranky Geek event in San Francisco (and 4th in total).

It will take place on November 18, at Google’s office on Spear street.

Our focus this time around is going to be mobile. We’ve got sessions lined up that should cover most of the aspects related to WebRTC and mobile. Things like using React, cross platform development, video compression, specific aspects in iOS as well as specific aspects in Android related to WebRTC.

If you are into mobile development with real time communications – then this is an event you don’t want to pass up.

There is also a new attendance fee that was added – $10 that gets donated to Girl Develop It. You may notice we don’t have a woman speaker this time – it is hard to find women speakers in this domain, so if you are one or know one – make sure to let us know for our future events.

I’d like to thank our sponsors who made this thing possible:

  • Google – who brought us WebRTC in the first place and is instrumental to the success that is Kranky Geek
  • TokBox – sponsoring both the San Francisco and São Paulo events. They will share their experiences with mobile aspects of WebRTC related to Android
  • Twilio – sponsoring both the San Francisco and São Paulo events. Their session in San Francisco will cover WebRTC and the Internet of Things
  • Vidyo – a new sponsor that is joining the Kranky Geek family, and probably the best one suited to talk about real time video compression technologies that make sense in mobile devices

Kranky Geek São Paulo

This will be my first time in Brazil and also the first time we run Kranky Geek in Brazil. As with San Francisco, the event is hosted at Google’s office in São Paulo.

Our focus for São Paulo will be back to the basics of WebRTC. We are trying this time to fill in the gaps – share resources and insights that developers who use WebRTC in their daily activities need. This is why we have a few sessions that are targeted at debugging and troubleshooting WebRTC in this event.

Registration for the São Paulo event is free.

For the São Paulo event, we got the help of a few sponsors as well:

  • Google
  • TokBox – at São Paulo, TokBox will share with us how to deal with device and connectivity issues when it comes to WebRTC sessions
  • Twilio – will be looking at the makeup of a WebRTC service, as the browser implementation of WebRTC is the beginning of the journey only
  • WebRTC.ventures – who are sponsoring this event for the first time, will give the overview and introduction to WebRTC
  • Callstats.io – will explain what you can find in getstats() and how to use it

See you there

I have my own session to prepare for the upcoming Kranky Geek, along with a lot of work to make these two events our best yet. There are also changes and modifications that need to make their way to the website – but rest assured – these events have great content lined up for you.

If you happen to be in the area, my suggestion is to come to the event – it is the best place to learn and interact with people who know way better than I do what WebRTC is, inside and out.

And if you want to meet me – just contact me. I’ll be “in town” for an extra day or so.

See you all at Kranky Geek!

The post Get Ready for Kranky Geek San Francisco AND São Paulo appeared first on BlogGeek.me.

Quiet please – people are studying

Mon, 10/24/2016 - 12:00

No article today.

My course is launching today: Advanced WebRTC Architecture Course.

I’ve got some solid attendance for it, along with a good bulk of high quality material lined up.

Hopefully, this will be a success.

If you are taking the course – then good luck and please share your thoughts with me – I’ve built this course for you and I’d like you to benefit from it as much as possible.

If you aren’t taking it but still want to attend – feel free to enroll. I’ll be closing up course signups end of this week, with no clear indication if and when I’ll be running it next.

Now quiet please – there are people studying in here. Somewhere. Hopefully.

The post Quiet please – people are studying appeared first on BlogGeek.me.
