bloggeek

The leading authority on WebRTC

DIY or SaaS for Your In-App Messaging?

Tue, 03/29/2016 - 12:00

No easy answer.

What route should your messaging implementation take?

If there’s one thing I like, it’s writing code. I haven’t done much of it in years, but it is still my passion. A year or two ago, I did a small coding project for something I needed. After a whole day of coding it dawned on me that I hadn’t checked my email, social networks or notifications the whole time – and didn’t even miss them. The only thing these days that can focus me on a single task at a time is programming.

When I did develop, and later managed developers, there was always that tension of NIH in the air – the Not Invented Here syndrome that we developers are so good at. We want to develop stuff on our own and not “outsource” it to others. Hell – any piece of code I wrote a year ago looks like crap the next year and needs a rewrite.

I had the chance to listen in to Apigee’s recent webcast on Build vs Buy API Management. See it here:

This webcast goes over a lot of the reasoning I see in any development project when a decision needs to be made between build and buy.

The funny thing is that I don’t hear this kind of a discussion enough when it comes to messaging. Somehow, people think it is trivial.

I took a few of the concepts in this webcast and “translated” them into the realm of build vs buy for messaging.

Limited view of the scope

When a project starts, adding messaging doesn’t seem that hard. You have a bunch of people. Maybe some presence indication. Pass around a few WebSocket messages for the text involved in the conversation and you’re done.

But is it really true, or is there more to messaging? It is far from trivial. Even simple things like delivering messages while disconnected or handling push notifications are notoriously hard to get right – even for those who should be the experts in it.
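To make the “limited scope” point concrete, here is a minimal sketch of the kind of WebSocket relay most teams start with (my own illustration, assuming Node.js and the popular ‘ws’ package – neither is mentioned in the post). Notice how much of real messaging it simply ignores:

```typescript
import { WebSocketServer, WebSocket } from 'ws';

// The "weekend prototype": fan each text message out to whoever is connected right now.
const wss = new WebSocketServer({ port: 8080 });

wss.on('connection', (socket: WebSocket) => {
  socket.on('message', (data) => {
    for (const client of wss.clients) {
      if (client !== socket && client.readyState === WebSocket.OPEN) {
        client.send(data.toString());
      }
    }
    // Deliberately missing - and this is where the real work hides:
    // - storing messages for users who are offline right now
    // - push notifications to wake up mobile apps
    // - delivery/read receipts, multi-device history sync, presence, analytics
  });
});
```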

When you define what it is you need to build for your messaging, more often than not you’ll be doing it with the following “mistakes”:

  • You will have a narrow scope of what is really needed
  • You will focus on the functional part of messaging, but probably a lot less of the other requirements (such as a good backend to understand what your system is doing and how people end up using it)

With limited scope comes the challenge of not comparing the right things when deciding between build or buy.

RISK

Every development project is risky. Purchasing an off-the-shelf solution usually mitigates that risk by handing the work to someone else, with payment and deliverables known in advance.

Developers tend to ignore risk – especially if the project is interesting enough to build. And yes. A distributed, low latency, high efficiency, large scale messaging backend written in Lua or Go is highly interesting.

You are not WhatsApp. Or Netflix

Building your own messaging system is hard. It takes a lot of effort. WhatsApp seems so easy, but getting there is hard.

This shift towards in-app messaging that is occurring means that in most cases, messaging is becoming part of an IT project and not exactly an R&D project. As a company, this means the focus is elsewhere and that messaging is considered a commodity or a non-core technology.

In such cases, there is no real funding for ongoing development, support and maintenance of an in-house DIY messaging framework.

Can open source help?

Sure, but is it at the right level of maturity?

There are a few dozen open source messaging frameworks out there. They probably do the work, but barely.

And the main challenge is that messaging is rapidly changing, which means that whatever is out there today is probably somewhat obsolete or out of sync with what you need anyway – and getting it to where you need it means more investment on your end. Probably.

To top it all, with most of these open source initiatives, what you’ll find is that they have one main contributor behind them. That contributor is most probably a vendor offering support and proprietary modules to commercialize the open source offering. Things like reporting, scaling, maintenance, etc. – all of these fall into the proprietary, paid domain.

So if the idea from the start was to use open source to refrain from having to negotiate and work with a vendor, where does that lead you down the road? Isn’t it better to acknowledge the fact from the onset and find a suitable solution out of a larger set of available vendors?

Time To Market

I know. I know.

If you write your own messaging system, it will take you the better part of a weekend. Adding a bit of code and stability around it clocks it at a month. Nothing can beat that.

But what is it you are comparing here? Are you concerned only with your prototype implementation, or is it production grade we’re talking about?

Getting something to production requires a lot more time.

Why are you even going DIY?

Is it because it will be cheaper?

Because you’ll have more control over your future and destiny?

DIY is going to cost you in time and effort which you don’t necessarily have.

If and when this project of yours succeeds, you’ll find that it brings with it more requirements and maintenance work. What you’ll also find is that the budget might not be there for you to handle that extra load in development. You promised the organization a working messaging system, and now that it is working – why are you asking for more funding exactly?

 

 

Easy? Hard? Core? Commodity?

I guess in most cases, deciding to develop your own messaging system requires a very good reason.

At testRTC we had that same need, though slightly different. We needed a way to communicate with the browser machines we’re running. It was all well and good when the number of machines was rather small and their locations were simple. It became a real headache when we grew bigger and customers started connecting machines in locations with flaky internet connections. We ended up integrating one of the realtime messaging players for that purpose – and haven’t looked back since.

Messaging might seem easy, but it is pretty hard once you get to the details.

So why not outsource it and be done with it?

The post DIY or SaaS for Your In-App Messaging? appeared first on BlogGeek.me.

Everyone and His Dog is Fixing WebRTC

Mon, 03/28/2016 - 12:00

Enhanced. Fixing. Solving. Enterprise grade. Improving. Completing.

I’ve been seeing this too much lately.

Companies decide to market their product as a way to “fix” WebRTC. The gall.

I understand where this comes from. Marketing is a lot about FUD. How to put fear in your potential customer until the only thing left for him to do is buy.

If you look closely, though, none of them really “fixes” WebRTC. The only thing they are doing is using WebRTC in a way that may fit you as a customer.

An example?

Companies who “fix” WebRTC by adding signalling to it. Or adding authorization. Or having it connect to PSTN.

This isn’t about “fixing”. This is about supporting a specific scenario or feature in a product – not even related to WebRTC itself.

Others “fix” WebRTC by having it work on IE (forcing a plugin on the user or using Flash). Again, less about WebRTC, and more about the use case.

And you know what? WebRTC doesn’t offer notifications either – I am sure you can go ahead and “fix” WebRTC by adding push notifications to your app on top of WebRTC!

WebRTC is a very powerful building block, but that’s about all it is – a building block. You’ll need to add additional building blocks to create a solution with it, so no – you aren’t fixing it – you are just implementing your use case with it.

Please.

Stop fixing WebRTC. It isn’t broken.

Just focus on solving a real world problem for a real customer and be done with it.

The post Everyone and His Dog is Fixing WebRTC appeared first on BlogGeek.me.

Standards are for Losers

Mon, 03/21/2016 - 12:00

They really truly are.

Whenever someone whines to me that WebRTC isn’t a standard yet, so it isn’t ready, it makes me laugh. Who the hell cares about such a thing anymore?

The standard is whoever’s got the clout and strength in the market. Ask any marketer – would they want to interact with the carriers’ standardized, federated (and almost non-existent) RCS client to send a message, or would they rather interact with WhatsApp users? The answer, in countries where WhatsApp is popular, will be WhatsApp. Marketers don’t care about the standard. The users don’t care about the standard. And most developers don’t care either – as long as the interface is adequately documented.

Enter WebRTC.

No. The IETF hasn’t gone through the motions and finalized the spec yet.

Yes. It might change.

No. I couldn’t care less.

You see, there are already billions of users available to me via WebRTC. There’s source code I can take, compile and run anywhere I want. There’s a vibrant ecosystem of developers and vendors ready to assist. There’s a large and growing number of companies and use cases that make use of WebRTC.

Who am I to say that WebRTC doesn’t exist because someone didn’t put their “standard” stamp on it?

For the last 3 years I’ve been using WebRTC almost daily to communicate with others using various services. Not once did I think this wasn’t working because there’s no standard.

Whenever companies band together to create a standard, I begin to question their motive. These days, it usually comes from a point of weakness – a place where there is one (or more) vendors who are strong in a domain and the only way the smaller kids can have a go at it is by specifying a standard to rally all small players to fight the dominant force.

Whenever you see a standard being announced – ask who isn’t there – that’s the one with the power.

In the case of codecs, MPEG-LA asserts its power and dominance over the H.264 and H.265/HEVC video codecs. Which is why the Alliance for Open Media (AOMedia) was created and announced – to find an alternative codec and win the market back.

The examples are countless.

In the domain of real time communications, everyone was using H.323 or SIP. Then Skype came out, ignoring standards altogether. The industry tried its best to explain that Skype isn’t federated, that there’s no standard there. To no avail. So companies (the same ones) tried connecting to Skype, to offer that as part of their service.

The same is happening today with WhatsApp and other social networks. They are so big, that they are the standard.

WebRTC is driving the same kind of shift. It is taking the hegemony over VoIP away from the VoIP vendors and putting the weight of this industry on the browser vendors. And now, those vendors are complaining that WebRTC isn’t interoperable. That it doesn’t fit their needs. They don’t understand that they are neither in control here nor influencers. They lost control over that part of the technology.

This isn’t to say that WebRTC won’t stabilize or get standardized – it is just that it doesn’t matter when it comes to adoption.

Standards? They are for the losers to run after to make sure they get to play the game. The winners don’t really need them.

Planning on introducing WebRTC to your existing service? Schedule your free strategy session with me now.

The post Standards are for Losers appeared first on BlogGeek.me.

WebRTC is a Distraction

Mon, 03/14/2016 - 12:00

Had to take this one out of my system.

Just in time for Enterprise Connect, Dave Michels decided to write a post to attract readers. The title? WebRTC is a distraction. It is hard to pinpoint what’s wrong with the arguments in this one, but most of them are just lacking in knowledge or understanding of this market and how it operates – which is sad, especially coming from Dave, whom I value very much.

The 4 main reasons why it is a distraction for Dave?

  1. Limited support
  2. Mobile is what really matters
  3. Why bother?
  4. WebRTC is dangerous

Let’s try to dismantle each of these so called arguments one by one. Shall we?

#1 – Limited Support

WebRTC today runs on Chrome and Firefox. Microsoft went for ORTC (=WebRTC) and is now “considering” WebRTC as well.

Apple isn’t there, but frankly – I almost never hear complaints about Safari not having WebRTC. For some reason, Mac users have been trained to use Chrome when needed. Furthermore, if you care about rumors, there’s work being done at Apple on WebRTC.

Add to that the fact that no other solution runs in a browser. No other. None. Zilch. They are all getting thrown out of browsers, which are dropping support for plugins, Java and probably Flash in the future. And what else has this amount of support anyway?

Now, you can use WebRTC as a desktop app, using a plugin, through Java – or in whatever other manner people use their comms today – so that limited support is wider than any other alternative to date.

Doesn’t work for you? Don’t use it. But don’t complain that others are using it and are happy about it.

#2 – Mobile is what really matters

To whom?

And while at it, using WebRTC inside an app makes a lot of sense. You shouldn’t care about the technology – just your customers. If they want apps, give them apps. Wrap WebRTC and be done with it.

There’s no other serious media engine for mobile that can be considered – the price point would be prohibitive, as would the investment required.

Mobile is what really matters, which is why Facebook Messenger uses WebRTC. In both mobile and desktop. And is probably larger in deployment, users, minutes, seconds and engagement than anything else the unified communications market has to show for its huge success in its 10+ years of existence.

You know what? I am tired of waiting for unified communications to happen. It is time we take matters into our own hands (with WebRTC) instead of waiting for these large stale companies to move at a reasonable pace and come up with a workable solution.

#3 – Why bother?

Dave says Google no longer cares or invests in WebRTC. I’d say this can’t be further away from the truth.

Google is heavily invested in WebRTC today, judging by the number of new features and changes they bring with every new version of Chrome (which ships every 6-8 weeks, as opposed to the 12-18 months of the slow vendors Dave asks us to put our trust in).

The pace of change for WebRTC is staggering. Nothing comes close to it.

In the span of a year, we’ve seen the echo canceler replaced in WebRTC, VP9 introduced, H.264 under way, ORTC-related APIs added – and that’s just what I can remember off the top of my head (much of it in the last couple of months alone).

Will Google continue at this breakneck speed? Who knows? For now, I’ll take what I am given – especially for free.

#4 – WebRTC is dangerous

Not sure where to start here.

With Unified Communications and its current cadre of vendors, the issues raised by Dave (things you don’t understand and control coupled with hard to patch and upgrade) are a lot more dangerous.

Do you know when your PBX was last upgraded for that critical security issue it had? Do you even know if it was upgraded at all? What about the router you have at home? This FUD about security in WebRTC reeks of misunderstanding of the technology.

We are living in a world where we move everything to the cloud and our mobile devices. In such a world, security needs to be taken seriously. Not by introducing stupid proprietary solutions that are hard to manage or maintain, but rather by introducing cloud based solutions that can upgrade and update automatically. Ones where security is taken into account from the ground up and not as a bolt on feature to show the buyer.

WebRTC has all that and more, so if you think WebRTC is dangerous – sure it is. To anyone who is trying to compete against the companies using it. In the long run, resistance is futile.

The truth of it

Google doesn’t care about the unified communication market when it comes to WebRTC.

They just couldn’t care less if this causes headaches for Cisco or Polycom or anyone else in this market. The way vendors are bitching about WebRTC shows how they view VoIP and UC as their own turf – as if they are entitled to what goes on there, and as if someone needs to think about their business models and legacy deployments so they don’t get hurt.

Get over it.

WebRTC is a huge distraction to those who aren’t built to embrace it. They are going to fade away. Just a matter of time. And Dave – you won’t need to wait much longer for it to happen.

 


The post WebRTC is a Distraction appeared first on BlogGeek.me.

Developer Ecosystem Acquisitions Makes Build vs Buy Decisions Harder

Thu, 03/10/2016 - 12:00

Who do you go to with your WebRTC needs?

That moment you realized you selected the wrong vendor

There are now over 20 vendors out there offering WebRTC APIs in the cloud.

20.

How the hell do you decide which one to pick for your service?

This question was rather “simple” to answer, but it is getting harder.

Two months ago, Facebook decided to shut down Parse. This is something that should not be taken lightly.

In 2013, Facebook acquired Parse. Parse was an MBaaS (mobile backend as a service) platform. If you want to build a mobile app, in all likelihood you’ll need some backend – a place to store account information, maybe sync data between users, etc. MBaaS does exactly that, and in this domain Parse was one of the bigger platforms. It had around 60,000 applications on the platform at the time of the acquisition – not something to take lightly.

Facebook didn’t acquire Parse for its great technology but rather for its developer ecosystem – for its popularity. In the two years since, Facebook invested more in the platform – just so it can close it.

In the context of communication API platforms with WebRTC capabilities, what we’ve seen so far are two kinds of acquisitions:

 

  1. Acquiring a technology – Snapchat acquiring AddLive and Requestec getting acquired by Blackboard are such examples. So is the Crocodile RCS acquisition by Acision, with Acision later wrapped into Xura
  2. Acquiring a developer ecosystem – TokBox’s acquisition by Telefonica and the recent Cisco acquisition of Tropo

Will Cisco decide in a year or two to shut down Tropo if it doesn’t bring the traction Cisco wants, or once it has served its purpose of getting enterprises to adopt Cisco Spark?

Would Telefonica stop investing in TokBox? Highly unlikely after 3 years, but who knows? I wouldn’t have bet on Facebook shedding Parse.

The thing about Parse is that Facebook didn’t even spin it off again – or sell it. It just closed the service. More akin to how Snapchat treated its own acquisition of AddLive.

Kin Lane explains nicely the false expectations people had from Facebook and Parse:

There is no basis for believing a platform or API will ALWAYS be there, no matter what you are promised. Companies go out of business, get acquired, and in this fast paced tech climate, companies are always looking to deliver the latest product, and features. Everything in the space points to disruption, change, and evolution, where the hell did we get the idea these services shouldn’t go away?

What can we deduce?
  1. Platforms with large ecosystems aren’t impervious to being taken off market. TokBox may get shuttered. Twilio might get acquired
  2. In the build vs buy decision of WebRTC, using a platform doesn’t mean write once and forget. You may need to update your code, switch vendors, etc. – be ready for it

As I start working on another update for my Choosing a WebRTC API Platform report, I will take the time to research the reasons for vendors selecting the less popular API platforms – what makes them take that plunge. If you are such a vendor – contact me.

Until this new update gets released (April-May timeframe), there’s a $700 USD discount on the report (which includes a 1-year update period).

The post Developer Ecosystem Acquisitions Makes Build vs Buy Decisions Harder appeared first on BlogGeek.me.

WebRTC Multiparty Video Alternatives, and Why SFU is the Winning Model

Mon, 03/07/2016 - 12:00

It’s the money, stupid.

We all love to hate the model of an MCU (besides those who sell MCUs that is).

There are in general 3 main models of deploying a multiparty video conference:

  1. Mesh – where each participant sends his media to all other participants
  2. MCU – where a participant is “speaking” to a central entity who mixes all inputs and sends out a single stream towards each participant
  3. SFU – where a participant sends his media to a central entity, who routes all incoming media as he sees fit to participants – each one of them receiving usually more than a single stream

I’ve taken the time to use testRTC to show the differences between the 3 multiparty video alternatives on the network.

To sum things up:

  • Mesh fails miserably relatively fast. Anything beyond 3 participants isn’t usable anywhere in a commercial product if you ask me (a rough back-of-the-envelope comparison follows right after this list)
  • MCU seems the best approach when it comes to load on the network
  • SFU is asymmetric in nature – similar to how ADSL is (though this can be reduced, just not in Jitsi in the specific scenario I tried)
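Here is that back-of-the-envelope comparison of the per-participant network load in each model (my own sketch; the 1 Mbps per video stream is an assumed figure, not one taken from the measurements above):

```typescript
// Streams each participant sends/receives in an N-way video call, per topology.
function perParticipantLoad(n: number, kbpsPerStream = 1000) {
  return {
    mesh: { upKbps: (n - 1) * kbpsPerStream, downKbps: (n - 1) * kbpsPerStream },
    sfu:  { upKbps: kbpsPerStream,           downKbps: (n - 1) * kbpsPerStream },
    mcu:  { upKbps: kbpsPerStream,           downKbps: kbpsPerStream },
  };
}

// In a 5-way call: mesh needs 4 Mbps up AND down per person, the SFU needs
// 1 Mbps up / 4 Mbps down (the ADSL-like asymmetry), and the MCU stays at
// 1 Mbps each way - paying for it with server-side CPU instead.
console.log(perParticipantLoad(5));
```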

This being the case, how can I even say that SFU is the winning model for WebRTC?

It all comes down to the cost of operating the service.

Here’s what an MCU does in front of each participant:

How media gets processed by an MCU

Here’s what an SFU does in front of each participant:

How media gets processed by an SFU

To make things easy for you, I’ve marked, in colors ranging from green to red, the amount of effort each step puts on the CPU.

The most taxing activity in an MCU is the encoding and decoding of the video. With the current and upcoming changes in video and displays, this isn’t going to lessen any time soon:

  • Google just switched to VP9, which takes up more CPU
  • 4K displays and cameras are becoming a reality. 8K is being discussed already. This means 4 times the resolutions of full HD

If anything – things are going to get worse here before they get any better.

It is no surprise then that MCUs scale on single machines in the 10’s of ports or low 100’s at best; while SFUs scale on single machines in the 1,000’s of ports or low 10,000’s.

Which brings us to two very important aspects of this:

  1. Price per port, where an SFU will ALWAYS be lower than MCU – by several factors
  2. Deployment complexity

The first reason is usually answered by saying that if you want quality – you need to pay for it. Which is always true. Until you remind yourself that video calling today is priced at zero for the most part.

The second reason isn’t as easy to ignore. If you aim for a cloud based service that needs to serve multiple customers, your aim is to get to 10,000 or more parallel sessions. Sometimes millions or more. This would be a good time to remind you that WhatsApp crossed the billion monthly active users mark, and that most messaging services only become interesting when they cross 100 million monthly active users.

With such numbers, deploying 100 times more machines to support an MCU architecture instead of an SFU one is… prohibitive. There are more costs that need to be factored in, such as power consumption, rack space and higher administration costs.

The end result?

An SFU model is by far the most popular deployment today for WebRTC services.

Does it fit all use cases? No

Will it fit your use case? Maybe

Do customers care? No

 

Planning on introducing WebRTC to your existing service? Schedule your free strategy session with me now.

The post WebRTC Multiparty Video Alternatives, and Why SFU is the Winning Model appeared first on BlogGeek.me.

Stop Whining about WebRTC Security Threats

Thu, 03/03/2016 - 12:00

It is a waste of time.

I’ve heard it more than once. Security threats in WebRTC make it a bad alternative. You have MITM (man in the middle) attacks on it. It leaks IP addresses. You can screen share without the user’s knowledge. The list goes on.

It isn’t the first time I’ve written about WebRTC security, and it still pisses me off when I see answers like this one on Quora:

The WebRTC plugin (which means Web Real-Time Communication) allows to conduct audio and video teleconferencing just in a browser without any additional software installed. However, it reveals the true IP address. How to disable WebRTC in various browsers.

A few things about that one:

  1. WebRTC isn’t a plugin…
  2. Why would you want to disable it?

If you trust Skype or any other VoIP or messaging app more, then you are in for a big surprise.

I read the above Quora answer on the same day I read Troy Hunt’s piece on controlling a Nissan remotely – one that… well… isn’t YOUR Nissan.

The things Nissan got wrong here include:

  • Having cars get sequential serial numbers, so they are easy to guess
  • Having an undocumented backend API that controls cars remotely – with no authentication on it

I don’t want to go into additional measures they could have added such as geolocation for the origination of the command or throttling to bar hackers from going berserk on their car fleet.

What would a leaked IP address on a WebRTC session in a browser do exactly compared to such stupidity?

The bane of security is developers and processes.

IoT (Internet of Things) is going to bring us many more such stories. That’s because it is built by developers, and developers make mistakes. Multiply that a thousandfold, put it in a heating market where features and gadgets take center stage, pushing back privacy and security – and you get hackable cars.

Telephony and video conferencing systems of old are devices sitting in networks. They need to “interoperate”. They have IT people who like controlling how things get deployed and updated. Are you sure these have been configured to work encrypted? (I am sure most deployments aren’t.) Are you sure the IT person really upgraded to the latest version that patches a bunch of security flaws?

And while we are talking about communications: the router you have at home that gives you WiFi on one end and connects you to the internet via ADSL or whatever on the other end – when did you last upgrade its firmware? Did you ever change its password from the default? Is your service provider taking care of these things for you by any chance?

Compare all of that to WebRTC. Here’s why it is in much better shape:

  • It is encrypted. By default. And there’s no way to remove that encryption from occurring (people complain about that one as well – go figure)
  • It gets updated every 6-8 weeks with your browser. That update includes security patches when they are found
  • It now forces (at least on Chrome) the sites using it to run over HTTPS instead of HTTP (did we say encryption?)
  • It has permission mechanisms around camera and microphone access (see the sketch right after this list)
  • It has stricter permission mechanisms around screen sharing (whitelisting and extensions)
  • Whenever someone peeps about security – it gets discussed and potentially updated in the implementation. Which gets to your browser in… 6-8 weeks
  • Being a part of Chrome and other browsers means security gets front row and is prioritized properly
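To illustrate the permission mechanism mentioned above, here is a minimal sketch (my own, not from the post) of how camera and microphone capture is gated in the browser – there is no code path that grabs media silently:

```typescript
// Capture only works on a secure origin and only after the user approves the prompt.
async function startCapture(): Promise<MediaStream | null> {
  if (!window.isSecureContext) {
    // Chrome exposes getUserMedia only on HTTPS (or localhost).
    console.warn('Capture requires a secure origin');
    return null;
  }
  try {
    // This call triggers the browser's camera/microphone permission prompt.
    return await navigator.mediaDevices.getUserMedia({ audio: true, video: true });
  } catch (err) {
    // NotAllowedError: the user said no, and JavaScript cannot override that.
    console.warn('User declined capture:', err);
    return null;
  }
}
```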

Yes. Developers can still do stupid things on top of WebRTC and botch it all, but that’s true about that snazzy new car you just bought or the smart TV that looks at you and hears what you say.

What more do you want?

If I wanted to hack you, WebRTC would be the last place I’d start.

The post Stop Whining about WebRTC Security Threats appeared first on BlogGeek.me.

Does Google’s Support of RCS Changes Anything for WebRTC?

Mon, 02/29/2016 - 12:00

No.

Now that we got that one out of the way, let’s see why the recent announcement from Google and the GSMA isn’t relevant to WebRTC.

On February 22, the GSMA issued a press release titled Global Operators, Google and the GSMA Align Behind Adoption of Rich Communications Services. The subheading sums up the message:

Operators align on universal RCS profile; Google to provide RCS messaging client in Android

I was asked if this kills WebRTC – and the efforts of companies invested in WebRTC already.

There are two ways to view these questions:

  1. People don’t understand what WebRTC (or RCS) is
  2. People are just afraid of Google deciding on a whim to close WebRTC as just another experiment (think Google Reader, Wave, Buzz and a lot of other technologies and services in the Google graveyard)

Nothing really changed

I’ve written about Google’s acquisition of Jibe. Nothing has changed since then. Back then I assumed that telcos would accept this and adopt it.

The recent press release shows that that has happened – at least by the GSMA. Time will tell which of the carriers will join this initiative.

I am not sure it will save RCS, but I still believe it is the only alternative that gives RCS any future.

How is that different than WebRTC?

When I think about RCS, I think signaling, messaging and federation. It is about serving all people with a mobile device.

When I think about WebRTC, I think about media processing, business enablement, business processes and customization.

RCS isn’t about to take the world back by storm. It won’t beat WhatsApp or Facebook Messenger or WeChat or any of these other players any time soon. And if it does, it won’t be useful for most use cases I’ve seen with WebRTC anyway.

While both RCS and WebRTC can now be said to be promoted by Google, they aren’t serving the same needs in Google.

Will Google stop supporting WebRTC?

I don’t think that’s a possibility in the foreseeable future. How much investment it will put into WebRTC is another topic.

WebRTC is now part of HTML5. It is implemented by Google, Mozilla and Microsoft (don’t start with me on ORTC here please). Rumors abound about Apple, but I don’t really care at this point.

Google dropping WebRTC means back to plugin realm for things like Google Hangouts. And for things like RCS.

When you want to implement an RCS client in a browser and initiate a voice call through it – from inside the browser – what are you going to use for it? Flash?

Google needs to continue its investment in WebRTC as long as it feels it needs Hangouts as part of its strategy. Messaging is  important to Google – check out their investments and acquisitions around messaging vendors. To that end, it can’t just drop WebRTC.

If, on the other hand, WebRTC gets to a point where it is good enough for Google, its investment in it may change. Until all browsers support WebRTC reasonably – there’s no threat of this happening.

The post Does Google’s Support of RCS Changes Anything for WebRTC? appeared first on BlogGeek.me.

Join me in London for WebRTC Global Summit

Sun, 02/28/2016 - 14:00

Why don’t we meet in London in April?

It is that time of year. Informa is holding its annual WebRTC Global Summit in London in April.

This year, there are three tracks going on: Telecom, Developer and Enterprise

As with last year, if you arrive early (=for the weekend), you can also attend the TADHack event that is taking place.

I am chairing the developer day along with Chris Koehncke. We’ve worked hard to bring you some interesting topics and fresh new content.

While the developer day is free to attend, the rest of the conference is something I am waiting for as well.

When? 11-12 April

Where? Cavendish Conference Centre, London, UK

Free registration here

I will speak about two topics during the event:

  1. Video codecs and WebRTC
  2. Testing challenges with WebRTC

If you plan on attending or are just in town, then make sure to contact me in advance or just come say hi when you see me at the conference.

 

The post Join me in London for WebRTC Global Summit appeared first on BlogGeek.me.

SoftBank’s Adoption of WebRTC Should be a Wake Up Call to Video Conferencing Vendors

Thu, 02/25/2016 - 12:00

Wake up and smell the ashes?

This week, as part of the slew of announcements of MWC, there was this one – SoftBank Deploys Large-Scale WebRTC-Based Conferencing Application Enabled by Dialogic. From the press release:

SoftBank Corp. has selected Dialogic® PowerMedia™ XMS software media server as a core network element of their new multimedia web conferencing solution, supporting SoftBank’s enterprise collaboration needs for video conferencing and chat room capabilities. The WebRTC-based web conferencing application will replace aging legacy video equipment and services for employees across their various divisions and brands.

The emphasis is mine, so let’s unravel it a bit.

  • Dialogic PowerMedia XMS is a media server for developers
  • Video conferencing in enterprises was something you purchase not something you develop
  • But something is changing
  • Fidelity in the US acquired Vidtel a few years ago to get in-house the ability to build their own video conferencing capabilities
  • SoftBank is doing the same now by licensing PowerMedia XMS and probably some other tools from other vendors
  • To top it off, it is transitioning from “legacy video equipment” (=video conferencing vendors) to an in-house solution

Microsoft Skype? Cisco Telepresence? Or Spark? Polycom?

No. Just WebRTC. With their own logic and implementation.

It is not only verticals

If you had asked me in 2015, I’d have said that video conferencing has its place, but that it is now limited to the enterprise. Finance, retail, contact centers, healthcare, education – all of these now have their own specialized vendors offering WebRTC solutions that are far more focused on the business of the vertical than a generic video conferencing vendor can ever be. It was easy to see why these verticals were heading away from video conferencing towards WebRTC vendors.

But video conferencing?

And without even a vendor?

DIY?

Unheard of!

But SoftBank is now doing it.

Why is it important?

The value of video conferencing in its generic unified communications form is diluting.

It is no wonder that Polycom closed its office in Israel and many of the other players of this market are struggling to grow. The future ahead of a legacy video conferencing vendor is murky. If I were working in that market – I’d be worried. Very worried.

SoftBank is just another instance of the tectonic shift taking place – the change in guard in communications that is happening all around us.

 

 

The post SoftBank’s Adoption of WebRTC Should be a Wake Up Call to Video Conferencing Vendors appeared first on BlogGeek.me.

The Biggest Risk of Building a Business over Messaging Platforms

Tue, 02/23/2016 - 12:00

Do you really want to trust a messaging platform to be there tomorrow as well?

Building house of cards on top of Facebook?

Facebook just killed Parse. A successful mobile BaaS platform they acquired in 2013. There’s a nice round up of feedback about it on Business Insider.

Inside the span of the same year, Facebook also announced the ability for businesses to integrate with its messaging platforms (both Messenger and WhatsApp).

It is funny, somehow. The Business Insider article indicates that Orbitz was one of Parse’s customers. I wonder how willing they will be to use another Facebook API to drive their messaging in front of their own users.

Here’s the thing. Messaging platforms are about messaging platforms. Most of them, don’t really care about the ecosystem of developers being built around them.

Twitter is famous for closing doors on developers. In 2012, it changed its rules around APIs, limiting access in a way that virtually killed any possibility to develop alternative Twitter clients.

What are we left with? The simple fact that relying on a single messaging platform and its API access for your service and business model is risky at best. Probably suicidal.

There’s a shift happening in the world. It started somewhere in the dot com bubble, morphing every couple of years:

  • Websites
  • Mobile Apps
  • Messaging

Websites were easy. With access to the internet, everyone could do anything. There were no real gatekeepers besides Google and its search engine – but that’s a rather “soft” sort of gatekeeper – you could succeed without it (ask Facebook or Twitter).

Then we started the great migration towards mobile and applications. We were left with two gatekeepers – Apple and Google. Apple with its inconsistent and somewhat puritan approval rules, and again Google. Now if you want to reach out to users, you go through these companies, who hold the keys to that kingdom.

Recently, it started changing, with a migration happening towards messaging apps. With billions of users interacting through messaging, these are turning into platforms of interaction – places where businesses, virtual assistants and bots can interact with the users of the platform.

The difference now, is that these messaging platforms have a lot more control over the users who end up using them – and by extension, over the enterprises who integrate with their service.

My suggestion?

If you need messaging in your service, build it on your own – unless “socializing” and communicating directly with specific social networks adds some huge benefit for you. The risks are just too great to be worth it.

 

Kranky Geek India takes place in Bangalore on 19 March 2016. Register to join us!

The post The Biggest Risk of Building a Business over Messaging Platforms appeared first on BlogGeek.me.

Different Requirements of Scaling real time video

Mon, 02/22/2016 - 12:00

There’s scaling and then there’s scaling.

The post from last week about the future of WebRTC live broadcast left some interesting impressions. Comments on that post and in Facebook. Red5 even did a follow up post on it.

One thing that was missing from these comments is an understanding of what scale means. Or rather the different types of scaling that are required when it comes to real time video.

Here are a few different aspects of scaling real time video.

#1 – Streams per machine

This is something that was raised on one of the comments on Facebook:

Most of the SFUs out there can actually handle 100’s and even 1000’s of connections (our data is not public but look at JVB:https://jitsi.org/Projects/JitsiVideobridgePerformance) and with most of them it should be possible without much effort to configure multiple SFUs in cascade to scale almost without any limit in my opinion.

That answers the question how many parallel sessions can you conduct on a single machine?

What is this one good for?

When you know how many sessions / streams you plan on having, you can then calculate how many machines you’ll need to run that scenario. From there, it is easier to extrapolate costs.
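A minimal sketch of that extrapolation (the capacity and price figures below are illustrative assumptions of mine, not benchmarks):

```typescript
// Back-of-the-envelope fleet sizing once you know per-machine stream capacity.
function estimateFleet(concurrentStreams: number, streamsPerMachine: number, usdPerMachineMonth: number) {
  const machines = Math.ceil(concurrentStreams / streamsPerMachine);
  return { machines, usdPerMonth: machines * usdPerMachineMonth };
}

// e.g. 50,000 concurrent streams on machines that comfortably hold 1,000 streams each,
// at an assumed $500/month per machine:
console.log(estimateFleet(50_000, 1_000, 500)); // { machines: 50, usdPerMonth: 25000 }
```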

But that’s not our only vector of scale.

#2 – Streams per session

How many streams can we “bundle” per session?

In the comment above, what wasn’t mentioned is that these tests of 100’s and 1000’s of connections were run when each session had no more than 33 streams in it. So if what I want is to live broadcast a singer to 1000’s of viewers in real time – this SFU solution won’t be suitable for my need.

It is nice to be able to do multiparty video or to broadcast live with low latency, but always ask yourself – what’s the upper limit here for this single session? How many participants can I cram into that session without making things impossible on my infrastructure?

There are, in general, two critical challenges here:

  1. When the number of users per session grows, the amount of communication between peers should be limited. At the extreme, a broadcaster should not be harassed by viewers directly (which is where the SFU starts breaking at scale, and why I assume Jitsi preferred not to test above 33 participants)
  2. When the number of users per session grows beyond a single machine, how does that compute? You’ll need to be able to distribute the session somehow, either by cascading or by some other means of architectural magic

It is also worth pointing out that the larger the group, the more fragmentation issues you’ll have across parallel sessions – if the size of a session is dynamic, then on what kind of machine should you start it? One which is free or one which is already somewhat busy? Can you dynamically route a session to other machines when the need arises? How do you load balance this?

#3 – Failure diffusion

This one is related because the higher the scale and capacity, the more of an issue this will be.

Let’s assume we can get a machine to run 10,000 streams in parallel. I am optimistic today. Let’s also assume that this all happens in a single process running in our machine.

What happens if there’s a bug somewhere (and believe me – there already is) which happens to cause the system to crash? Whenever we hit the bug, 10,000 streams get disconnected.

Now let’s further assume that each session holds 10 streams on average. And the bug was invoked due to one of these streams doing something slightly unorthodox. Now we have one session causing the disconnection of 999 more sessions on that machine.

Which leads us to the question –

Can I run multiple processes on the same machine, each catering to a smaller number of sessions? Maybe even only a single session? How does that impact memory and performance? Is it even desirable?

For some, this might be necessary in their architecture – and it is very far from how telecom services are architected…

When Talking About Scaling…

Make sure you refer to the specific aspects you wish to scale.

 

Planning on introducing WebRTC to your existing service? Schedule your free strategy session with me now.

 

The post Different Requirements of Scaling real time video appeared first on BlogGeek.me.

The Future of WebRTC Live Broadcast

Thu, 02/18/2016 - 12:00

It is on the viewer side.

Live broadcast is all the rage when it comes to WebRTC. In 2015 it grew 3-fold. It is a hard nut to crack, but there are solutions out there already – including the new Spotlight service from TokBox.

WebRTC Live Broadcast Today

If you look closely, most of the deployments today for live broadcast using WebRTC look somewhat like the following diagram:

How you live broadcast using WebRTC today

What happens today is that WebRTC is used for the presenter – the acquisition of the initial video happens using WebRTC, right up to the broadcast server. There, the media gets transcoded into the dialects used for broadcasting – Flash, HLS and/or MPEG-DASH.

The problem is that these broadcast dialects add latency – check this explanation about HLS to understand.

With our infatuation with real time, and the drive to move any type of workload and use case towards real time, it’s no wonder that the above architecture isn’t good enough. In my discussions, many entrepreneurs would love to see this obstacle removed, with live broadcasts having a latency of mere seconds (if not less).

The current approaches won’t work, because they rely heavily on the ability to buffer content before playing it, and that buffering adds up to latency.
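A rough worked example of where that delay comes from (segment length, buffer depth and packaging overhead below are typical assumptions of mine, not figures from the post):

```typescript
// Glass-to-glass latency of a segmented protocol like HLS is dominated by buffering.
const segmentSeconds = 6;    // duration of each media segment
const segmentsBuffered = 3;  // players typically buffer ~3 segments before starting playback
const packagingSeconds = 2;  // segmenting, packaging and CDN propagation (rough)

const latencySeconds = segmentSeconds * segmentsBuffered + packagingSeconds;
console.log(`~${latencySeconds}s latency`); // ~20s - versus sub-second for a WebRTC media path
```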

WebRTC Live Broadcast Tomorrow

This is why a new architecture is needed – one where low latency and real time are imperatives and not an afterthought.

Since standardization and deployment take time, the best alternative out there today is utilizing WebRTC, which is already available in most browsers.

How WebRTC live broadcast will look like tomorrow

The main difference here? The broadcast server needs to be able to send WebRTC at scale and not only handle it on its ingress.

To do this, we need a totally different server side WebRTC media implementation than the alternatives on the market today (both open source and commercial).

What happens today is that WebRTC implementations on the server are designed to work almost back-to-back – they simulate a full WebRTC client per connection. That’s all nice and well, but it can’t scale to 100’s, 1000’s or millions of connections.

To get there, the server will first need to break its dependency on the presenter – it will need to be able to process media by itself, but do so in a way that is optimized for large scale sessions.

This, in turn, means rethinking how a WebRTC media stack is architected and built. Someone will need to rebuild WebRTC from the ground up with this single use case in mind.

I am leaving a lot of the details out of this article due to two reasons:

  1. While I am certain it can be done, I don’t have the whole picture in my mind at the moment
  2. I have a different purpose here, which we are now getting to

A Skillset Issue

To build such a thing, one cannot just say he wants low latency broadcast capabilities. Especially not if he is new to video processing and WebRTC.

The only teams that can get such a thing built are ones who have experience with video streaming, video conferencing and WebRTC – that’s three different domains of expertise. While such people exist, they are scarce.

Is it worth it?

Optimizing down from 20 seconds latency to 2 seconds latency. That’s what we’re talking about.

Is investing in it worth the effort? I don’t have a good answer for this one.

 

Planning on introducing WebRTC to your existing service? Schedule your free strategy session with me now.

The post The Future of WebRTC Live Broadcast appeared first on BlogGeek.me.

Are WhatsApp and Messenger competitors or partners in Facebook?

Tue, 02/16/2016 - 12:00

Two messaging services. Focused on consumers. Doing practically the same thing. Do they compete or cooperate under Facebook’s roof?

Messenger and WhatsApp are the biggest messaging platforms today. Messenger announced 800M monthly active users recently, while WhatsApp celebrated hitting the 1 billion mark. As they both strive to continue this rapid growth, I have to ask – are they joining forces or competing fiercely between themselves?

The reason I raise this stems from how they implemented web support and VoIP:

  • Messenger unbundled from Facebook, opening its own independent site, which acts as a full messenger client. If you want to make calls, you use WebRTC for that
  • WhatsApp created a web frontend tethered to the phone app. It cannot work without the phone nearby. And when it comes to VoIP, it might be using the same codecs as WebRTC, but not the vanilla implementation

They are taking different architectural approaches. But they end up implementing the same feature set.

WhatsApp in 2015

Here’s what WhatsApp did, or was rumored to be working on, in the last year:

Messenger in 2015

Here’s what Messenger did in the last year:

 

Not much of a difference…

Running such a thing at scale of 100’s of millions of people is painfully hard. Doing that twice under the same roof is even harder:

  • It seems like they develop everything twice – on separate infrastructure and architecture
  • There’s no federation between the two – you can’t send a message from a Messenger user to a WhatsApp user – even though both belong to the same company

Where would each of these services go next for growth?

The above slide from eMarketer shows how, in some countries, the main competitor of WhatsApp is Facebook Messenger – and vice versa. I think each of them tries independently to grow its user base – with no real regard for the other’s footprint in any given location.

This one from Activate goes to show how growth for both these platforms come from the same areas – and where they overlap or compete on the same set of users.

Something doesn’t add up here for me, though it is hard to put my finger on it.

WhatsApp is probably still a strange bird in Facebook, far from the rest of the company and its DNA. Getting it in line with Facebook will take considerably more time.

 

The post Are WhatsApp and Messenger competitors or partners in Facebook? appeared first on BlogGeek.me.

Would WebRTC be as Big a Thing if it Didn’t Run in a Web Browser?

Mon, 02/15/2016 - 12:00

Probably not.

I wrote about Peer-to-Peer and WebRTC recently, and got this interesting question about it from Fabian Bernhard on LinkedIn:

Without arguing about the quality of a specific Open Source media stack, would you say that WebRTC was as big a thing if it didn’t run in a web browser?

I guess the answer is no – it wouldn’t be that big a thing.

Here’s what I am getting at. There are two popular slides I usually use:

The one above explains that WebRTC sits at an intersection – it appeals both to VoIP people as well as to Web people.

The second slide above is about what makes WebRTC so transformative – it is about the fact that it is Free, but also because it is available for Web people.

Without the web browser part, we would have been left with only Free.

We’ve had open source media engines before. GStreamer is a popular one. Codecs were a bit harder to come by – especially those that don’t require patent payments (royalty free). It wasn’t the best thing out there, but it worked – people still use it today.

WebRTC made the open source version of a media engine as good as a commercial one – it came out of an acquisition of a commercial media engine vendor after all.

But that’s where it stops – it wouldn’t have made such a transformation in the market – it would be more of the same with a small evolutionary step. Nothing to write home about.

The browser bit, though… that made VoIP available and open to everyone with some HTML and JS experience – a much larger pool of talent, and one dabbling a lot in experimentation. This is what got us so many use cases.

Mobile might be different

For mobile-only use cases, WebRTC would have made all the difference – same as it does today. The idea behind it on mobile isn’t that it offers a browser experience or that it is available in the browser (it isn’t on iOS). The idea is that it is the cheapest route to a product compared to anything else out there. And with the trend of communications moving in-app, that would still make the impact it has there relevant.

Which brings us full circle.

Let’s assume mobile is eating up the world. Let’s assume it is only a matter of time until content creation and not only content consumption moves from the PC to mobile. Once that happens – who cares about what happens in the browser?

It will all be in-app anyway.

And there – WebRTC is making a difference.

 

Kranky Geek India takes place in Bangalore on 19 March 2016. Register to join us!

The post Would WebRTC be as Big a Thing if it Didn’t Run in a Web Browser? appeared first on BlogGeek.me.

WebRTC Use Cases

Thu, 02/11/2016 - 12:00

WebRTC use cases? An endless list of opportunities.

Here are a few, off the top of my head, of the use cases I’ve come across in the past year or so, where WebRTC was used or seriously planned to be used.

  • Simple video chat
  • Web conferencing
  • International voice calling
  • Receiving a call in the browser
  • Hospital clowns
  • Performing digital art on stage
  • Visiting a museum at night
  • Praying
  • Porn
  • Adult gaming
  • Gaming
  • Doctor visitation
  • Group therapy
  • Jail visits
  • Banking
  • Retail
  • Drug prescription
  • Document signing
  • Live broadcasting
  • Radio stations
  • Music jams
  • Karaoke
  • Gym classes
  • Dance classes
  • Teaching and learning online
  • Expert consulting
  • Contact centers
  • CRM integration
  • Job interviews
  • Virtual classes
  • One on one tutoring
  • Web meetings
  • Webinars
  • Dating
  • Video streaming
  • P2P CDN
  • Private messaging
  • File sharing and sending
  • Assisting hard of hearing people
  • Assisting the blind
  • Assisting people who need live translation
  • Language learning from a tutor
  • Practicing language with native speakers
  • Security cameras
  • Collecting sensor data

Did I miss any WebRTC use cases? Definitely.

What will you do with WebRTC today?

And if you built anything – might as well publicize it on the WebRTC Index.

The post WebRTC Use Cases appeared first on BlogGeek.me.

What’s the Size of Your Messaging app?

Tue, 02/09/2016 - 12:00

Not too big, but not small either.

Here’s a shocker – Facebook Messenger has been updated 19 times on Android in 2016. WhatsApp has had 25 releases in the same time span. And we’re not even in the middle of February.

We are talking about the two messaging applications with the largest number of monthly active users, with WhatsApp surpassing the one billion milestone. gulp.

Should we place messaging apps under weight watchers?

To deliver an app that weighs 26 MB to a billion people (I am thinking WhatsApp here), you end up sending over 23 petabytes of data (translation: a shitload of bits). Doing that 25 times since January 1st…
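The arithmetic behind that number, for the skeptics (app size and install base are the post’s figures; the rest is just unit conversion):

```typescript
const appSizeBytes = 26 * 1e6;              // ~26 MB per update
const installs = 1e9;                       // ~1 billion users
const totalBytes = appSizeBytes * installs; // 2.6e16 bytes per release
console.log(`${(totalBytes / 2 ** 50).toFixed(1)} PiB per release`); // ~23.1 PiB
// ...and that is for a single one of the 25 releases shipped since January 1st.
```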

I took a stab at looking into the consumer messaging apps (some of the enterprise ones are larger, though less frequently updated). Here’s what I found:

#1 – They are all fattening up

The scatter graph above is a bit scattered, but it is easy to see that most apps are increasing in size over time – from September 2014 until January 2016. They all migrated from the 10-20 MB range into the 20-40 MB range. That’s a doubling of their weight in less than two years.

We don’t think about it much, but we’re in a serious need of a diet here:

  1. This loads our networks. Not as much as video traffic, but still significant
    • Most users have more than one such app on their phone
    • These apps update frequently
    • It adds up
  2. With WhatsApp reaching the one billion mark, where will it be headed next?
    • To maintain its growth it needs to search for additional users
    • These need to come from developing countries
    • And there, bandwidth and data is scarce
    • The smaller the app, the easier it is on users to handle
  3. Most of the messaging apps don’t seem to care about how fat they are

#2 – Size doesn’t equate to feature richness

The bar chart above shows how big the latest version of each of these messaging apps is.

The results are rather surprising:

  • Skype is the most bloated of them all at this point in time, but I don’t remember anything new or interesting that Skype on mobile introduced in the last two years. And yet – it managed to double its size
  • Those in the vicinity of one billion users/downloads are trying to stay on the skinny side – Facebook Messenger, WhatsApp and Hangouts are all rather small compared to the rest of the pack – and somehow, Facebook Messenger is even smaller than WhatsApp (I’d have expected the opposite)
  • WeChat and LINE, which can be seen as e-commerce platforms, are larger than most, but somehow Skype and Viber managed to be even bigger

I wonder when a diet will be called for. And maybe it already is.

 

Kranky Geek India takes place in Bangalore on 19 March 2016. Register to join us!

The post What’s the Size of Your Messaging app? appeared first on BlogGeek.me.

Are You Using AppRTC as Your WebRTC Baseline Reference?

Mon, 02/08/2016 - 12:00

If you aren’t using AppRTC yet then you should start.

I had a few customers last month who had quality issues with their service. They were trying to understand the root cause of these issues, and at times, the question raised was “is WebRTC up for the task?”

  • Does the poor audio quality we experience in our service derive from the codec, the browser’s implementation or something in our own backend?
  • Do the video stutters stem from heavy packet loss and that’s just life – or are we adding some issues of our own into the mix?
  • The average bitrate we reach in a call – is it because the browser is limiting us? Is it because the connection is bad? Is it…

The list goes on.

The fact that you now get a fully implemented media engine in the browser for free is great. The problem is, it gives you (or your developers) the opportunity to blame the browser: it isn’t us – Google’s engineers did such a crap job with X that we just can’t fix it.

More often than not – this won’t be the problem.

When in doubt – check AppRTC

Google launched AppRTC quite some time ago.

AppRTC is Google’s way of showcasing WebRTC in their simplest version of the “Hello World” program. This being WebRTC, there are many moving parts, but to some extent, AppRTC is rather baseline – especially in its dealings with media.

This makes AppRTC a great baseline reference when you have issues with the media paths of your own service or just want something to compare it with.

Got an issue? Test what happens when you run AppRTC and compare it with your own service. If you see that your service isn’t performing in the same manner, chances are the problem is on your end – and now you can start diverting focus and resources towards finding the problem instead of blaming the browser.
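When doing that comparison, pull the same numbers from both sessions. A minimal sketch using the spec’s getStats() API (field names follow the current spec – older WebRTC builds expose slightly different ones):

```typescript
// Sample inbound video stats so they can be compared against an AppRTC run
// made on the same machine and network.
async function sampleInboundVideo(pc: RTCPeerConnection) {
  const report = await pc.getStats();
  report.forEach((stat) => {
    if (stat.type === 'inbound-rtp' && stat.kind === 'video') {
      console.log({
        bytesReceived: stat.bytesReceived, // diff two samples over time to get bitrate
        packetsLost: stat.packetsLost,
        jitter: stat.jitter,
      });
    }
  });
}
```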

Where to look for the problems?

  • Your NAT traversal servers, if they are being used
  • Are you doing any backend processing for the media? Map your pipeline there. Check each step of that pipeline to see if it is to blame
  • Transcoding never fails to fail you – check there if you use it
  • Jitter buffers are notoriously… jittery. Make sure the implementation fits your use case
  • Network routes and handling dynamic bitrates and packet losses might be handled nicely by the browser, but is your backend up for the task as well?

Don’t forget test.webrtc.org either

Google has another great analysis tool – test.webrtc.org

You open the settings, insert your own STUN and TURN server configuration – and start the test.

It will then check the system and network connections to give you a nice view of what the browser is experiencing – something you can later use to understand the environment you operate in.
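The STUN and TURN details you feed into test.webrtc.org should be the exact ones your application hands to RTCPeerConnection – otherwise you are testing a different path. A minimal sketch (addresses and credentials are placeholders):

```typescript
// Use the same servers in the app that you just verified with test.webrtc.org.
const pc = new RTCPeerConnection({
  iceServers: [
    { urls: 'stun:stun.example.com:3478' },
    {
      urls: 'turn:turn.example.com:3478',
      username: 'placeholder-user',
      credential: 'placeholder-pass',
    },
  ],
});
```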

Why is this important?

With WebRTC, it is easy for developers to blame the browser. This isn’t productive.

Your first task should be to create a baseline reference you can trust. One that enables isolating the issues you are experiencing systematically.

AppRTC is a good place to start.

The post Are You Using AppRTC as Your WebRTC Baseline Reference? appeared first on BlogGeek.me.

Join us for Kranky Geek India: March 19

Sat, 02/06/2016 - 08:00

Our next Kranky Geek event is taking place in India.

Kranky Geek events? We’ve done them twice. Both times in San Francisco. Both were very successful events. Both times, we didn’t know whether these were one-time gigs or something we wanted to continue doing.

Then we sat down to plan 2016, and came up with three planned events. The first one is taking place in Bangalore, India.

As with any Kranky Geek, this one is about developers of real time communications.

Like previous Kranky Geek events, it is free to attend. Sponsors take the burden of enabling us to plan this event and then pay for everything around it.

Google has been taking the lead here and helping us a lot in getting these events off the ground – in a way taking the leap of faith in our ability to manage these events.

India

Google asked us to do an event in India, so we happily obliged. For me, it would be the first time in India, making the excitement on my end even higher.

India makes sense in a lot of ways. Many of the vendors I end up looking at are local to India. Others are vendors with large development teams in India who end up doing a lot of the WebRTC development. Kranky Geek India gives me personally a great opportunity to meet many of these people in person.

To make things short:

Where? Bangalore, India

Exact location: MLR Convention Center

Date and time: March 19, 11:00 until we finish

How do I register? here

 

Our sponsors this time are Google, TokBox and IBM. Expect a large cadre of interesting speakers and topics – some local and some international in nature.

I’d love to see you with us at the event!

The post Join us for Kranky Geek India: March 19 appeared first on BlogGeek.me.

Subscribe to the New testRTC Blog

Thu, 02/04/2016 - 12:00

Just making sure you’re not missing out…

If you don’t know, for the last year I’ve been part of a great team of partners. Together we are building a WebRTC monitoring and testing service called testRTC. The service has been up and running for some time now, with an increasing number of customers.

The crappiest part of our service was our website (not the one customers are using, but rather the one potential customers look at). So we updated it recently.

One of the main additions to that website is the new blog there. I’ve got an editorial calendar for it running until March, with weekly content that I want to share with you, but I felt that BlogGeek.me isn’t the best place for it – it is either too focused on testing or too closely related to testRTC.

What will you find in our testRTC blog?
  • Announcements about our service and the versions we are rolling out
  • Useful tips for testers, like the one we published yesterday about .y4m files and Chrome
  • Things we think you should take care of in your testing practices
  • Insights into our design decisions for our internal architecture
  • Test script samples of how testRTC can be used to handle certain WebRTC testing issues

So some of the content will be relevant to everyone while other parts of it for those using testRTC.

Subscribe now and follow us

If this sounds interesting, I suggest you subscribe to our blog or social media link(s):

 

The post Subscribe to the New testRTC Blog appeared first on BlogGeek.me.
