We can do better.
In 2012, when I started this blog, I had only 3 WebRTC related posts in mind. One of them was about the room system of the future. While this has never materialized in the 4 years since, things have definitely changed in the video conferencing space.
Let’s see what video conferencing vendors have done about WebRTC so far (vendors are listed in alphabetical order).
Avaya

Avaya's assets in video conferencing come from its acquisition of RADVISION.
A quick glance at the current website specs for its video conferencing line of products (mainly SCOPIA) tells a rather sad story. SCOPIA offers the best money can buy – assuming we weren't already 4 years past 2012 and WebRTC didn't exist.
As the website states, you can “Experience crisp, smooth video quality with resolutions up to 1080p/60fps, stellar bandwidth efficiency, and error resiliency with H.265 High Efficiency Video Coding (HEVC) and Scalable Video Coding (SVC).”
Bolded tech words are my own.
Some things to note:
Cynicism aside, I have it from good sources that Avaya is working on adding WebRTC support to its gear. Where exactly it fits in Avaya's bigger picture, and why so late, is a different story altogether.
What bugs me the most here is that in the last 4 years, any advancement in the SCOPIA video conferencing product line has relied solely on hardware capabilities. You can't leapfrog competitors this way – especially when something like WebRTC comes onto the scene.
It is sad, especially since Avaya already works with and promotes WebRTC in contact centers. At least at the press release level.
Cisco

Cisco is a large and confusing company. If you look at its telepresence products, they resemble the ones coming from Avaya. Same highlights about speeds and feeds.
On the other hand, Cisco has thrown its weight behind a new product/service called Cisco Spark.
Cisco Spark is a Slack lookalike with a focus on voice and video communications by connecting to the massive line of products Cisco has in this domain. Cisco Spark uses WebRTC to drive its calling capabilities in browsers. What Spark enables is connectivity from web browsers using WebRTC to Cisco video conferencing products.
Cisco took the approach of using H.264, making it work only on Firefox and in future Chrome versions (unless you run the new Chrome 50 from the command line with the necessary parameter to enable H.264).
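As a side note, whether a given browser will even offer H.264 in WebRTC can be probed at runtime by generating an SDP offer and checking it for the codec. A minimal sketch (the helper name is mine, and this is not anything Cisco ships with Spark):

// Probe whether the current browser offers H.264 for WebRTC video.
// Creates a throwaway RTCPeerConnection, asks it for a video offer,
// and greps the resulting SDP for the codec name.
async function browserOffersH264(): Promise<boolean> {
  const pc = new RTCPeerConnection();
  try {
    pc.addTransceiver('video', { direction: 'recvonly' });
    const offer = await pc.createOffer();
    return /H264/i.test(offer.sdp ?? '');
  } finally {
    pc.close();
  }
}

browserOffersH264().then((supported) =>
  console.log(supported ? 'H.264 offered by this browser' : 'no H.264 in the SDP offer'));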
Cisco has also been heavily investing in acquiring and nurturing its WebRTC chops:
Cisco has a huge ship to steer away from hardware and it is pouring the money necessary to take it there.
Google Hangouts

WebRTC. Chrome. Hangouts. Google. All connected.
Google invested in WebRTC partly for its Hangouts service.
Today, Hangouts is using WebRTC natively in Chrome and uses a plugin elsewhere – until the specific support it needs is available on other browsers.
Google also introduced its Chromebox, its take on the room system. I am not sure how successful Chromebox is, but it is refreshing to see it among all the high-end systems out there that don't speak a word of WebRTC. It would have been nicer still if it could use any WebRTC service and not be tied to Hangouts.
The problem with Hangouts is its identity. Is it a consumer product or an enterprise product? This is hurting Hangouts adoption.
Lifesize

Lifesize was a Logitech division. It was focused on selling hardware room systems.
In 2014, Lifesize launched its own cloud service, starting to break from the traditional path of only selling on premise equipment and actually offering a video conferencing service.
In 2015, it introduced WebRTC support, enabling browsers to join its service – and connect to any room system while doing so.
2016 started with Lifesize leaving the Logitech mothership and becoming an independent company.
Microsoft Skype

Skype did nothing interesting until 2015. At least not when it comes to WebRTC. And then things changed.
Skype for Business, Skype for Web and the Skype SDK were all introduced recently.
Skype for Web started off as a plugin, which now runs natively on Microsoft Edge – the same initial steps Google took with Hangouts.
My own take here:
Polycom

Or should I say Mitel?
Polycom added WebRTC support in its launch of RealPresence Web Suite. In traditional enterprise video conferencing fashion, it seems like a gateway that connects the browser to its existing set of products and tools.
At almost the same time, Polycom shed its Israel office, responsible for its MCU. This is telling as to how transformative WebRTC is in this market.
Vidyo

Vidyo had a love-hate relationship with WebRTC throughout the years but has done a lot of work in this space:
2016? Two things already happened this year with WebRTC:
In a way, Vidyo is well positioned with its SVC partnership with Google to offer the best quality service the moment Chrome supports VP9/SVC. They also seem to be the only video conferencing vendor actively working on and with VP9 as well as supporting both VP8 and H.264. Others seem to be happy with H.264/VP8 or running after H.265 at the moment.
The New Entrants

There are also some new entrants into this field. Ones that started around the time WebRTC came into being, or later. The ones I am interested in here are those that connect to enterprise video conferencing systems.
These include Unify, Pexip, Videxio and many others.
What defines them is their reliance on the cloud, and in many cases the use of WebRTC.
They also don’t “do” room systems. They are connecting to existing ones from other vendors, focusing on building the backend – and yes – offering software connectivity through browsers, plugins and applications.
My room system dreams

I'll have to wait for my WebRTC room system for a few more years.
Until then, it is good to see progress.
The post How Video Conferencing Vendors Adapt to WebRTC? appeared first on BlogGeek.me.
Why and where do we use SVC exactly?
[When Alex Eleftheriadis, Ph.D., the Chief Scientist & Co-founder of Vidyo, approached me about writing a piece about SVC and WebRTC – how could I refuse? Someone had to give the explanation, and what better person than Alex to do that?]
Just when the infamous WebRTC video codec debate appears to have been settled, with both H.264 and VP8 being set as mandatory-to-implement by browsers, VP9 has started making inroads into the WebRTC software stack and into browsers themselves. Indeed, Chrome 48 includes, for the first time, VP9 support for WebRTC. Firefox also includes support for it in WebRTC in the Developer Version of Firefox 46.
Why is this relevant for the WebRTC community – users and developers? First off, VP9 offers significantly better compression efficiency compared with H.264, and even more so compared with VP8. This translates to better quality for the same bit rate, or a lower bit rate for the same quality (as low as 50%). This by itself is a big plus, but it does not tell even half of the story.
The Need for Scalability

When using WebRTC beyond two-way, peer-to-peer calls, or in networks with significant quality problems, system architects are encountering the same design issues that the videoconferencing industry has been dealing with for a long time now. It is not accidental then that WebRTC solutions designed for multi-point video gravitate towards those offered in videoconferencing, or that videoconferencing companies are adapting their systems to become WebRTC solutions. For the latter, this typically entails aligning with transport-level, security, and NAT traversal specifications, and of course providing a JavaScript library that enables WebRTC-enabled browsers to use their system's facilities.
If we look at today’s architectural landscape for high-quality multi-point video, there are two main designs. One is based on transmission of a single stream of scalable coded video. Scalable means that the same bitstream contains subsets, called layers, that allow you to reconstruct the original at different resolutions. If you get the lowest, or base, layer you can decode the video at a certain resolution, whereas if you also get a higher, or enhancement layer, you can decode the video at a higher resolution. This is great for robustness and adaptability, because you do not need to process the video at all to get at the different resolutions.
The second design is based on simulcast transmission of two separate streams that encode the same video at different resolutions. Contrary to the scalable design, here we have two encoding passes rather than one, with the associated streams requiring a higher bitrate compared with scalable coding. It is also less error resilient. On the plus side, however, simulcast allows the use of older, non-scalable decoders. This has been an important consideration for systems that interface with legacy devices (not relevant for WebRTC).
Single Layer, Scalable, and Simulcast Coding of Video. In scalable coding the various layers ("a" and "A") are multiplexed in a single stream. In simulcast two or more independently encoded streams are produced and are transmitted separately.
Both of these designs utilize a special type of server for which I have coined the term “Selective Forwarding Unit” (SFU). This type of server was not known when the original RTP Topologies RFC was published in 2008 (RFC 5117), but it is now included in its 2015 update, RFC 7667.
The operation of the SFU, using the VidyoRouter as an example. In the diagram the SFU receives three scalable streams, and it selects to forward the full resolution for the blue participant (base and enhancement layers), but only the base layer for the green and yellow participants.
The SFU works in the following way: it receives scalable or simulcast video, and it decides which layer or which stream to forward to a receiving participant. There is no signal processing involved, and the operation incurs very little delay (less than 10 ms is typical). If we contrast this with the traditional architectures that are still being used and involve transcoding of multiple videos, the advantages are obvious – both in terms of processing complexity but also in terms of delay (150 ms delays would be typical for the traditional architectures). Minimizing delay is hugely important for perfecting the end-user experience.
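To make the forwarding step concrete, here is a small sketch of the per-receiver selection an SFU might apply. The types, the 500 kbps-per-layer budget and the function names are hypothetical illustrations, not the VidyoRouter's actual logic; the point is that the server only picks which packets to relay and never decodes the video:

// Hypothetical illustration of an SFU's per-receiver selection step.
// A scalable sender produces a base layer plus enhancement layers; the
// SFU relays packets without decoding, choosing how many layers each
// receiver gets based on its estimated downlink bandwidth.
interface LayerPacket {
  senderId: string;
  layer: number;        // 0 = base layer, 1+ = enhancement layers
  payload: Uint8Array;  // encoded video bytes – the SFU never decodes these
}

interface Receiver {
  id: string;
  downlinkKbps: number; // e.g. taken from congestion control feedback
  relay(packet: LayerPacket): void;
}

// Assumed budget of roughly 500 kbps per forwarded layer (illustrative only).
function maxLayerFor(receiver: Receiver): number {
  return Math.max(0, Math.floor(receiver.downlinkKbps / 500) - 1);
}

function forward(packet: LayerPacket, receivers: Receiver[]): void {
  for (const receiver of receivers) {
    if (receiver.id === packet.senderId) continue;  // don't send a stream back to its sender
    if (packet.layer <= maxLayerFor(receiver)) {
      receiver.relay(packet);                       // pure packet relay – no transcoding, no decoding
    }
  }
}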
Also interesting is how the receiving endpoint operates. Contrary to legacy videoconferencing systems, it receives multiple streams that it has to individually decode, compose, and display on the screen. This multi-stream architecture perfectly matches WebRTC's design.
The multi-stream architecture of an SFU endpoint – the endpoint receives multiple video streams that it has to individually decode, and composite on the user’s screen.
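In browser terms, this multi-stream endpoint behavior maps onto handling one remote track per participant on the peer connection. A minimal sketch, assuming the signaling with the SFU is already set up elsewhere:

// Minimal sketch of an SFU-style endpoint in the browser: every remote
// participant arrives as its own track, and the page composites them by
// laying out one <video> element per video track.
function attachRemoteTracks(pc: RTCPeerConnection, container: HTMLElement): void {
  pc.ontrack = (event: RTCTrackEvent) => {
    if (event.track.kind !== 'video') return;       // one tile per remote video track
    const tile = document.createElement('video');
    tile.autoplay = true;
    tile.playsInline = true;
    tile.srcObject = new MediaStream([event.track]);
    container.appendChild(tile);
    event.track.onended = () => tile.remove();      // drop the tile when the track goes away
  };
}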
To appreciate the significance of these architectures it suffices to point out that both Skype for Business and Google+ Hangouts use simulcasting (of H.264 and VP8, respectively). So does the open source VideoBridge by Jitsi. Vidyo, which first introduced the concept in its VidyoRouter product in 2008, is using scalability (with H.264 SVC). Simulcast support is now in the scope of the WebRTC 1.0 specification and it is being actively worked upon. Scalable coding is already supported by the ORTC specification, and will be addressed in WebRTC-NV (post 1.0).
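As a concrete illustration of what that API work converged on, here is a minimal sketch of requesting simulcast from a browser via addTransceiver and sendEncodings. Exact parameter support varies by browser and was still in flux when this was written; this is not code from any of the products mentioned above:

// Sketch: request three simulcast encodings of the same camera track.
// An SFU can then pick the "q", "h" or "f" stream per receiver.
async function sendSimulcast(pc: RTCPeerConnection): Promise<void> {
  const media = await navigator.mediaDevices.getUserMedia({ video: true });
  const [track] = media.getVideoTracks();
  pc.addTransceiver(track, {
    direction: 'sendonly',
    sendEncodings: [
      { rid: 'q', scaleResolutionDownBy: 4 },  // quarter resolution
      { rid: 'h', scaleResolutionDownBy: 2 },  // half resolution
      { rid: 'f' },                            // full resolution
    ],
  });
}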
Scalability, SVC and VP9

Now we can turn back to our original question regarding scalability and VP9. If you want to be able to use an SFU architecture with scalable coding, the codec itself must support scalability. That's why back in 2013 Vidyo announced that it would be collaborating with Google to develop a scalable extension for the VP9 codec. This effort is now bearing fruit.
One may ask, “why care about VP9, I will just use whatever stock codec my browser has and be done with it.” The answer is that you do want to care, when quality matters. Depending on the codec used, and the type of multi-point server architecture deployed, the end user will get a vastly different quality of experience.
We can think of the WebRTC endpoint as a kitchen that has a bunch of ingredients. If your expectations are low, you can go for the raw vegetables and have a meal in no time. If you want a fine meal, you will want both the right ingredients as well as the right recipe. The standardization process will ensure that the WebRTC kitchen has all the right ingredients. The recipe and, in fact, the cook, are all part of whoever is offering the service. By taking into account all the realities of imperfect network transmission, heterogeneous clients, mobility, etc., they make sure that the users enjoy a great experience. If you go with a proprietary solution, you can then add plenty of secret sauce.
Endpoint Quality Scale: One ordering of relative quality of different codec and endpoint engine combinations.
Taking into account the different combinations of video codecs and endpoint engines, I put together an “Endpoint Quality Scale” diagram, shown above. You can think of it as the skeleton of the multi-point video kitchen menu. Vidyo is vigorously trying to be the three Michelin star restaurant; its proprietary engine uses a lot of secret sauce in addition to the standard ingredients. But together with the industry as a whole we want to make sure that the menu, especially when it comes to WebRTC, offers something for all tastes and price ranges.
Bottom line, when people select platform providers for their WebRTC-based solutions they need to be aware of these differences and, especially when quality matters, make an educated and well-informed choice. Bon appetit.
The post Scalability, VP9, and what it means for WebRTC appeared first on BlogGeek.me.
This week we had a number of wonderful improvements to mod_avmd as well as more work toward languages in mod_verto.
Join us Wednesdays at 12:00 CT for some more FreeSWITCH fun! And, head over to freeswitch.com to learn more about FreeSWITCH support.
New features that were added:
Improvements in build system, cross platform support, and packaging:
The following bugs were squashed:
Today, the core architects of the Kazoo platform are in Milwaukee, WI working with the amazing FreeSWITCH team. The FreeSWITCH team runs an awesome open-source project that is on the bleeding edge of communications - always. Their software, libraries and RTP integrations allow us to power the audio portions of your call, and we’re working together to allow video and other features, too.
This year, our focus is on optimizing how Kazoo and FreeSWITCH integrate. We hope to expose more FreeSWITCH features natively. Our talks today will help shape the future of the Kazoo project in that regard.
We’re pleased to continue working with, and supporting where we can, the FreeSWITCH project. If you’re interested in learning more about the inner workings of how components of Kazoo work - FreeSWITCH being one of them - we’d encourage you to come to one of our upcoming FreeSWITCH trainings or to join the FreeSWITCH team at their annual ClueCon conference in Chicago.
Forecasts are overrated.
I've been asked time and again about the market sizing of WebRTC, and I've tried to shy away from it every time. The Dilbert strip above explains why…
This whole notion of estimating the size of a market that is, to be frank, hard to define, short on solid numbers and too new – it all leads to the question: why bother?
What are you looking for? The market size of WebRTC contact centers? Is that only for the WebRTC piece of it? Greenfield ones? With or without call widgets in tiny WordPress sites? How do you place a monetary value on it? Is it the WebRTC part or the whole contact center you're interested in? Do you want the number to amount to a billion dollars and work backwards from there so it fits your desired strategy?
All useless.
2 billion users. X% CAGR. 15% YoY growth.
Sure.
Any day.
With something like WebRTC, such things are close to impossible as far as I can tell, and probably not really worth it. Need to throw a number in the air? Generate it randomly. If it's good enough for the TSA, then why not for you?
Which leads me to something you won't find in my WebRTC PaaS report – the one that deals with the WebRTC API market and assists developers in understanding if they should use a vendor and in picking the right one (if that's the course selected). These estimates don't help in such a case. They are worse than useless.
Need estimates? Find some other report online. They will happily share their guesstimates in press releases (I've seen a few lately), so you can decide if it is worth paying to get that "validation" you need for your management.
Need to make real decisions on how and what to implement? That probably won’t be in these reports.
The post You Won’t Find Guesstimates in My WebRTC PaaS Report appeared first on BlogGeek.me.
Losing customers because of issues with your network service is a bad thing. Sure, you can gather data and try to react, but isn't it better to prevent issues in the first place? What are the most common pitfalls to look out for? What's a good benchmark? What WebRTC-specific user experience elements should you spend […]
The post The Big Churn – learning from real usage stats (Lasse Lumiaho and Varun Singh) appeared first on webrtcHacks.
There’s progress, but the real action will be in 2017.
There has been a lot of chatter lately around Apple’s snail-like progress in supporting WebRTC and Microsoft’s announcements at their BUILD conference. I am still left under-impressed but positive and confident. Here’s why.
Apple and WebRTC

Let's start with Apple. The only official statement we will get from Apple will be "we have WebRTC". Question is when, for what and if at all.
We have indications of progress in Apple, and Alex is keeping us updated on the goings-on with Apple and WebRTC.
I think Itay Rosenfeld is making a good case why Apple needs WebRTC more than WebRTC needs Apple.
So we know WebRTC is of interest to Apple and we know it is being added to Safari.
We know one more thing. Apple is actually trying to refresh and update its Safari browser. Dare I say “modernize” it. They even recently started a Technology Preview for Safari, joining the rest of the gang of browser vendors to showcase their upcoming plans and intentions. That doesn’t include WebRTC, but WebKit indicates WebRTC as “in development” – and WebKit is the rendering engine used by Safari.
Will Safari include WebRTC? Yes.
When? My guess is end of 2016.
What will it include? WebRTC. H.264. No VPx “nonsense”.
Where? On Mac OS X, but not on iOS. That one will come in 2017.
Microsoft's Romance with xRTC

Microsoft added ORTC to Edge. I shared my view about Edge already. To sum it up – great browser. No adoption.
To date, there has been little adoption of Edge/ORTC by vendors. If my memory serves me right, adopters include Twilio, &yet and Frozen Mountain. That's less than impressive. And Microsoft knows that.
The problem here isn’t ORTC. It is Edge. And Microsoft seems to miss that minor detail.
At the recent Microsoft BUILD conference, a few announcements were made (thanks @hcornflower for the tip):
"We now have more than 150 million monthly active devices using Microsoft Edge" — @morris_charles #EdgeWebSummit pic.twitter.com/2q4ZVFeF9k
— Kenneth Auchenberg (@auchenberg) April 4, 2016
So. “150 million” monthly active devices. But no monthly minutes as in their last disclosure. I wonder what monthly active means and how many of them open it up just to get to IE when Chrome doesn’t work. I know that’s how I use it to get to a Silverlight site that my kid wants to use.
I guess this number was high and positive enough that the managers at Microsoft decided to focus on it instead of the more important number of average use time per user. This led them to this decision:
MS announces new WebRTC goodies coming to Edge:
H.264/AVC, VP8, MediaRecorder, DTLS 1.2, ECDSA certs pic.twitter.com/jDwRug2F13
— Justin Uberti (@juberti) April 4, 2016
It would have been better to just add WebRTC to IE11 in parallel, rather than entice users to switch to Chrome altogether.
When will vendors need to revisit Edge when it comes to WebRTC? Not before Q4 2016.
Microsoft Skype

Skype is interesting. Late to the market. 300 million active users. A lot, but unimpressive if you compare it to the leading consumer communication services out there.
Skype for Web is doing what Google Hangouts did in the first two or three years of WebRTC's existence – Google took components of the WebRTC implementation, modified them to fit its needs and made a plugin for Hangouts out of them, until it could just make it "native" to the browser.
As written in a recent comment I’ve read – they should have done this 5 years ago, but better late than never.
The more interesting part here is the newly minted Skype SDK. I think this is Skype's third attempt at an SDK – there may have been more. Previous ones were failures. Not because of lack of adoption, but rather because of the way developers were treated. This doesn't bode well for this round. Especially not if you couple it with the current numbers and the size of Skype.
That said, I can easily see Lync/Skype for Business enterprises adopting the SDK to deal with customer support related requirements, taking a bit of the market from WebRTC PaaS vendors. To go beyond this use case, it will take more effort from Microsoft.
The Microsoft Skype for Web and SDK initiatives need to be viewed in the light of other players as well.
Cisco Spark

Cisco Spark (along with their Telepresence and UC offering) goes head to head against Lync/Skype for Business.
Cisco made several interesting moves lately:
That's a lot of mileage to go up against Skype for Web and the Skype SDK.
You can easily say that when it comes to publicizing and marketing their investment in communication services and enablers, Cisco is ahead of Microsoft.
Google Hangouts

Google Hangouts is a shadow of what it could be when it comes to usage.
As a platform, it has it all. Everything you need to communicate, at a fraction of the cost of other solutions or for free. We use it daily at testRTC – both internally and to host meetings with customers and potential customers. We have no incentive to switch to anything else.
Hangouts adopted WebRTC from the beginning. First by embedding the WebRTC stack into the Hangouts plugin, using the components of WebRTC that it could, until it was able to just use WebRTC natively in Chrome. It still runs as a plugin on other browsers, but I assume that will change once WebRTC is supported there with all of the nuances Hangouts needs.
What Hangouts is lacking is the traffic and the APIs to go along with its service. I am assuming Google is aware of it.
Apple FaceTime

Apple has FaceTime, its proprietary service that should have been standardized at some point.
I'll be surprised if Apple does anything interesting or serious when it comes to connecting FaceTime to WebRTC or adding an SDK to it. Or, god forbid, letting the poor people of the world who use Android – or a 5-year-old Windows PC – connect to FaceTime.
Slack

Slack just added voice support with WebRTC and intends to add video. I've written about Slack a few times before, and how WebRTC is a logical investment for it. If it adds integration points in its API that expose its real-time communication capabilities, it might become a very interesting player in the SDK/API space.
The real question in this case: Will a vendor using Slack continue using Skype in the long run?
Facebook and WhatsApp

Facebook Messenger uses WebRTC. WhatsApp somewhat uses it.
Skype has 300 million monthly active users. That’s way smaller than WhatsApp’s billion and Messenger’s 800 million. I am assuming there’s more voice and video calling happening on Skype on average per user than on either Messenger or WhatsApp, but the trend is probably towards Facebook and not Microsoft here.
The reason Facebook is so strong here is its new initiatives towards enabling businesses to connect with its user base – the Facebook user base directly, the largest social network at the moment. If it wants, it can throw in voice or video interactions with an SDK on top of that.
WeChat, LINE, ooVoo and Viber

All have integration points. All heading in multiple directions for monetization. Be it businesses connecting to their user base, market places, digital currency or bots.
Leveraging Skype as an SDK means you want its reach and user base. But all of these messaging platforms have user bases in the hundreds of millions of active users as well. They essentially compete over similar mind share and budgets of enterprises.
What's in store for us in 2016?

More chatter and talk about Apple and Microsoft, but little in the way of progress by developers making use of Edge or Safari WebRTC capabilities. That will wait for 2017.
For Skype, there's a challenge here, but also an opportunity. They can leverage WebRTC, focus on developers and come up with use cases and success stories that will be hard to compete against. Microsoft is doing a lot already in this space, but there's a lot more they need to be doing when you look at the competition they have.
Want to make the best decision on the right WebRTC platform for your company? Now you can! Check out my WebRTC PaaS report, written specifically to assist you with this task.
Get your Choosing a WebRTC Platform report at a $700 discount. Valid until the beginning of May.
The post Microsoft, Apple and WebRTC in 2016 appeared first on BlogGeek.me.
Messaging is used too much to stay only in the browser.
There seems to be a few conflicting trends going on at the moment:
This last trend is the one I want to focus on here. When all of the apps we use on the PC are now browser web apps, there are generally two types of apps I still install on my laptop:
When it comes to communications, though, I prefer pinning tabs to the browser for the most common tasks I have – or just leave it to my phone. WhatsApp, Slack, Gmail – all get a pinned tab on Chrome for me. Whenever I need to use messaging in other domains (Facebook, LinkedIn, Meetup, Upwork, etc) – I just open a new tab in Chrome “on demand” and then close it once done.
I assume others install apps locally on Windows for things they want to use frequently. Which brings me to two interesting developments from the last year or so:
Great.
So we are now taking HTML5 web apps, wrapping them as Windows apps and installing them locally.
It probably makes sense for a lot of the enterprise messaging apps – instead of just living inside the browser, they become part of the installed set of apps on the desktop. Purists of WebRTC will complain that this is not how it's done. Detractors of WebRTC will say it isn't WebRTC at all. I'll say it is just another way of using the technology.
If you want to take your own communication web apps and make a desktop application out of them, then the most popular approach these days that I know of is CEF – Chromium Embedded Framework. It takes your web app, and packages it with Chromium so that they both get downloaded and installed together.
I assume that this is what Slack used. I am not sure about the Facebook Messenger one though – the addition of Windows tiles is a complication, but probably solvable.
In a way, web and HTML5 have already taken over our desktop. Even in apps, what you get these days is HTML5.
I wonder if and when this trend will hit mobile, and if so, whether it will be achieved via the new Progressive Web Apps approach.
The post Messaging is Migrating from the Browser to the Desktop appeared first on BlogGeek.me.
This week we had a number of awesome new features, but the most work went into adding translations to the verto communicator application! And we definitely couldn’t have done it without help from our wonderful community members! So far, we have translations for English, Spanish, Portuguese, Italian, French, Danish, German, Polish, Russian, Swedish, Indonesian, and Chinese. If you speak another language and would like to see that language available please submit a pull request with a translation!
Join us Wednesdays at 12:00 CT for some more FreeSWITCH fun! This week we have Chad Phillips talking about his FreeSWITCH Kickstart project! And, head over to freeswitch.com to learn more about FreeSWITCH support.
New features that were added:
Improvements in build system, cross platform support, and packaging:
The following bugs were squashed:
Not all of them.
Who is investing in its platform?

Twilio. Added a slew of services in 2015.
TokBox. Got a new Spotlight live broadcast service. But not only.
VoxImplant. Added HD to its audio conferencing.
The rest? Not really sure.
Most of the time, when people talk to me about their use case and the need to pick a specific platform, it boils down to a shopping list of features. They want everything. Usually more than any single vendor can offer. When prodded further, they reduce the need to a small set of requirements. But then again, they do see this added set of features in their future.
In many cases, selecting a vendor means understanding which of them might have what you need down the road in their roadmap – not necessarily in their service today, but they will get there by the time you do.
Guess what – this is another factor that needs to be included in the list of requirements you look at when selecting a vendor to work with.
This is why in the latest release of my "WebRTC PaaS report" I am adding a new section, which gives a quick indication of which vendors made changes to their platform (and whether these changes were serious or not). The information there dates back two years, giving some perspective.
If you are thinking of starting to use one of the WebRTC API platforms out there and are not sure which one to pick, then this report may come in handy. Until this next updated release, I've taken the price down considerably – if you purchase now you pay $1250 instead of $1950 and you get a year of updates (so the updated version will be yours next month, the moment it gets published).
Check the WebRTC PaaS report page to decide if you need it.
The post Which WebRTC PaaS Vendor is Investing in His Platform? appeared first on BlogGeek.me.
Novatel E371 (also known as Dell DW5804) is sold for less than $30 at Aliexpress, and it’s so far the cheapest 4G/LTE WWAN card suitable for PC-Engines APU.
The initialization is fairly simple, although it was tricky to find the right command (AT$NWQMICONNECT=,,).
cat >/etc/chatscripts/lte_on.E371 <<'EOT'
ABORT BUSY
ABORT 'NO CARRIER'
ABORT ERROR
TIMEOUT 10
'' ATZ
OK 'AT+CFUN=1'
OK 'AT+CMEE=1'
OK 'AT\$NWQMICONNECT=,,'
OK
EOT

cat >/etc/chatscripts/lte_off.E371 <<'EOT'
ABORT ERROR
TIMEOUT 5
'' AT\$NWQMIDISCONNECT
OK AT+CFUN=0
OK
EOT

cat >/etc/network/interfaces.d/wwan0 <<'EOT'
allow-hotplug wwan0
iface wwan0 inet dhcp
  pre-up /usr/sbin/chat -v -f /etc/chatscripts/lte_on.E371 >/dev/ttyUSB0 </dev/ttyUSB0
  post-down /usr/sbin/chat -v -f /etc/chatscripts/lte_off.E371 >/dev/ttyUSB0 </dev/ttyUSB0
EOT

Qualcomm Gobi 2000 is quite old (released in 2009), but it is a decent 3G modem, able to deliver up to 7 Mbps downstream in PPP mode. These modems in mini-PCIe packaging are available at Aliexpress for less than $10, and make a great option for 3G connectivity for PC Engines APU boards.
The modem needs a binary firmware to be loaded at start-up. Numerous sources on the Internet describe ways to retrieve these files. The kernel driver in Debian 8 recognizes the modem as a generic Qualcomm one, and sets up a QMI device (wwan0). But this model does not support packet mode, and you need to run PPP over the ttyUSB1 device.
apt-get install -y gobi-loader wvdial
mkdir /lib/firmware/gobi
cd /lib/firmware/gobi
wget --no-check-certificate -nd -nc https://www.nerdstube.de/lenovo/treiber/gobi/{amss.mbn,apps.mbn,UQCN.mbn}

cat >/etc/wvdial.conf <<'EOT'
[Dialer Defaults]
Init1 = ATZ
Init2 = AT+CGDCONT=1,"IP","internet"
Phone = *99#
New PPPD = yes
Modem = /dev/ttyUSB1
Dial Command = ATDT
Baud = 9600
Username = ''
Password = ''
Ask Password = 0
Stupid Mode = 1
Compuserve = 0
Idle Seconds = 0
ISDN = 0
Auto DNS = 1
EOT

cat >/etc/network/interfaces.d/ppp0 <<'EOT'
auto ppp0
iface ppp0 inet wvdial
EOT

Also, this script is useful for 3G connections, because with some providers the Internet connection gets stalled every few days and needs to be re-connected.
San Francisco, CA - April 1st, 2016 - With skyrocketing demand for integrations between different services, 2600Hz is staying ahead of the competition with a unique integration named “Carrier Pigeons”. This service allows customers to place calls to a specific number/extension where you then leave a voicemail. 2600Hz seamlessly transfers the voicemail to a thumb drive. Via automated, scalable, elastic engineering robots, the thumb drive is attached to a pigeon who is dispatched automatically to the specific location determined by the extension. Using 2600Hz Mobile services, Carrier Pigeons are tracked via GPS until they reach their destination. Their progress is shown in an amazing user interface, the Monster Pigeon app.
2600Hz’s Co-Founder Darren Schreiber explains “We’ve seen what employing contract workers has done for the transportation industry, with services like Uber and Lyft. We see no reason why this can’t be adopted in the communications industry. The challenge in our industry is that people want their voicemail messages quickly - faster than a person or a car can deliver the message. We realized that Carrier Pigeons were a natural choice.“
Co-Founder Patrick Sullivan expanded on the project further: "We are always on the bleeding edge. Now that we have integration with Voice, Video and SMS/MMS we had to think outside the box. When sitting in our conference room talking about the future of business communication, we noticed a vast amount of resources sitting outside our window – Pigeons. Most see dirty animals with wings. We see a way to deliver secure messages to places that might not have fiber or any other transit to get information to. We also believe everyone has a crazy uncle or aunt ‘living off the grid’ and you might want to communicate with them. We now have a way to do just that. This gives us a huge advantage over the typical CLEC/ILEC infrastructure. The opportunities are endless.”
The program is in Beta and is invite only. Project “Carrier Pigeon” will launch publicly in the next couple months. 2600hz has already received multiple Silicon Valley investment offers and hopes to display the technology live, on stage, at TechCrunch, with other successful projects such as Pied Piper.
For questions and information requests please contact:
Captain Crunch
[email protected]
140 Geary St.
San Francisco, CA
Ph: 415-886-7900
The FreeSWITCH 1.6.7 release is here! This is a routine maintenance release and the resources are located here:
Release files are located here:
New features that were added:
Improvements in build system, cross platform support, and packaging:
The following bugs were squashed:
This week we added functionality to support the NVENC hardware-encoded H.264 codec, for files and as the default on conference.
Join us Wednesdays at 12:00 CT for some more FreeSWITCH fun! And, head over to freeswitch.com to learn more about FreeSWITCH support.
New features that were added:
Improvements in build system, cross platform support, and packaging:
The following bugs were squashed:
No easy answer.
What route should your messaging implementation take?

If there's something I like, it is writing code. I haven't done so in years, but it is still my passion. A year or two ago, I did a small coding project for something I needed. After a whole day of coding, it dawned on me that I hadn't checked my email, social networks or notifications the whole time – and didn't even miss them. The only thing these days that can focus me on a single task at a time is programming.
When I did develop, and managed developers, there was always that NIH tension in the air – the Not Invented Here syndrome that we developers are so good at. We want to develop stuff on our own and not "outsource" it to others. Hell – if I wrote a piece of code a year ago, it was crap the next year and had to be rewritten.
I had the chance to listen in to Apigee’s recent webcast on Build vs Buy API Management. See it here:
This webcast goes over a lot of the reasoning I see going on in any development project when the decision needs to be made between build and buy.
The funny thing is that I don’t hear this kind of a discussion enough when it comes to messaging. Somehow, people think it is trivial.
I took a few of the concepts in this webcast and "translated" them into the realm of build vs buy for messaging.
Limited view of the scope

When a project starts, it seems that adding messaging isn't that hard. You have a bunch of people. Maybe some presence indication. Run around a few Websocket messages for the text involved in the conversation and you're done.
But is it really true, or is there more to messaging? It is far from trivial. Even simple things like delivering messages while disconnected or handling push notifications are notoriously hard to get right – even for those who should be the experts in it.
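To get a feel for where the "just run a few Websocket messages around" plan starts to crack, here is a minimal sketch of the reconnect-and-resend plumbing even a toy client ends up needing. The endpoint URL and message shape are made up for illustration, and this still ignores ordering, de-duplication on the server, and push notifications:

// Toy messaging client: queue messages while offline, reconnect with
// exponential backoff, and flush the queue once the socket is open again.
// Illustrative only – no ordering guarantees, no server-side dedup.
type ChatMessage = { id: string; to: string; text: string };

class NaiveChatClient {
  private socket?: WebSocket;
  private pending: ChatMessage[] = [];
  private attempts = 0;

  constructor(private url: string) {
    this.connect();
  }

  send(msg: ChatMessage): void {
    if (this.socket && this.socket.readyState === WebSocket.OPEN) {
      this.socket.send(JSON.stringify(msg));
    } else {
      this.pending.push(msg);                 // hold it until the connection is back
    }
  }

  private connect(): void {
    const socket = new WebSocket(this.url);
    this.socket = socket;
    socket.onopen = () => {
      this.attempts = 0;
      for (const msg of this.pending.splice(0)) {
        socket.send(JSON.stringify(msg));     // flush the offline queue
      }
    };
    socket.onclose = () => {
      const delay = Math.min(30000, 1000 * 2 ** this.attempts++);  // back off before retrying
      setTimeout(() => this.connect(), delay);
    };
  }
}

const client = new NaiveChatClient('wss://chat.example.com/ws');   // hypothetical endpoint
client.send({ id: '1', to: 'alice', text: 'hello' });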
When you define what it is you need to build for your messaging, more often than not you'll be doing it with the following "mistakes":
With limited scope comes the challenge of not comparing the right things when deciding between build or buy.
Risk

Every development project is risky. Purchasing an off-the-shelf solution usually mitigates the risk by having someone else do it, with payment and deliverables known in advance.
Developers tend to ignore risk – especially if the project is interesting enough to build. And yes. A distributed, low latency, high efficiency, large scale messaging backend written in Lua or Go is highly interesting.
You are not WhatsApp. Or Netflix

Building your own messaging system is hard. It takes a lot of effort. WhatsApp seems so easy, but getting there is hard.
This shift towards in-app messaging that is occurring means that in most cases, messaging is becoming part of an IT project and not exactly an R&D project. As a company, this means the focus is elsewhere and that messaging is considered a commodity or a non-core technology.
In such cases, there is no real funding for ongoing development, support and maintenance of an in-house DIY messaging framework.
Can open source help?

Sure, but is it at the right level of maturity?
There are a few dozen open source messaging frameworks out there. They probably do the work, but barely.
And the main challenge is that messaging is rapidly changing, which means that whatever is out there today is probably somewhat obsolete or out of sync with what you need anyway – and getting it to where you need it means more investment on your end. Probably.
To top it all, with most of these open source initiatives, what you'll find is that they have one main contributor behind them. That contributor is most probably a vendor offering support and proprietary modules to commercialize the open source offering. Things like reporting, scaling, maintenance, etc. – all of these fall in the proprietary, paid-for domain.
So if the idea from the start was to use open source to refrain from having to negotiate and work with a vendor, where does that leave you down the road? Isn't it better to acknowledge the fact from the outset and find a suitable solution out of a larger set of available vendors?
Time To Market

I know. I know.
If you write your own messaging system, it will take you the better part of a weekend. Adding a bit of code and stability around it clocks it at a month. Nothing can beat that.
But what is it you are comparing here? Are you concerned with your prototype implementation, or is it production grade we're talking about?
Getting something to production requires a lot more time.
Why are you even going DIY?

Is it because it will be cheaper?
Because you’ll have more control over your future and destiny?
DIY is going to cost you in time and effort which you don’t necessarily have.
If and when this project of yours succeeds, you'll find that more requirements and maintenance work come with it. But what you'll also find is that the budget might not be there for you to handle that extra load in development. You promised the organization a working messaging system, and now that it is working – why are you asking for more funding exactly?
Easy? Hard? Core? Commodity?
I guess in most cases, deciding to develop your own messaging system requires a very good reason.
At testRTC we had that same need, though slightly different. We needed a way to communicate with the browser machines we run. It was all fine and well when the number of machines was rather small and their locations were simple. It became a real headache when we grew bigger and when customers started connecting machines in locations with flaky internet connections. We ended up integrating one of the realtime messaging players for that purpose – and haven't looked back since.
Messaging might seem easy, but it is pretty hard once you get to the details.
So why not outsource it and be done with it?
The post DIY or SaaS for Your In-App Messaging? appeared first on BlogGeek.me.