I’ve spent many years of my career working on interoperability in communication systems. Back in the dark ages, I did SS7 interoperability testing. During my CLEC days, I ran a test lab that tested optical, telephony, and ATM/Frame Relay equipment. I’ve spent many years working on interoperability issues with SIP, starting with the SIP call flows (RFC 3665 and RFC 3666) and then SDP offer/answer (RFC 4317). I’ve also been to many SIPits (SIP interoperability events run by the SIP Forum), testing voice and video interoperability.
WebRTC poses some interesting interoperability challenges, but I am hopeful we will get it right.
There are four different areas of interoperability: browser, protocol, codec, and offer/answer. Let’s go through them one by one.
Browser interoperability is about a WebRTC application or site working the same regardless of which browser the user is running. In the past, browser interoperability was just a browser/server issue, but with the peer-to-peer media and data channel flows of WebRTC, it is now a browser/browser issue as well. The good news is that there are only a handful of browsers, so the interop matrix is not too large. The bad news is that there are signs of discord already in pre-standards implementations. For one thing, all browsers must implement the same APIs, or WebRTC will be a major headache for developers. Of course, libraries can hide this complexity from developers, but that will slow down deployment and produce some needlessly bad user experiences. If we see one browser vendor using their own APIs instead of the standard ones from the W3C, then we will know that someone is playing company games at the expense of the Internet users of the world. Hopefully this won’t happen, but if it does, users and developers will likely move away from that browser.
Protocol interoperability is a major concern for WebRTC. In the past, browsers didn’t implement many protocols – everything used HTTP (Hypertext Transfer Protocol). Today, browsers are doing more, including WebSockets, and will soon move to the next version of HTTP, 2.0. With WebRTC, the browser RTC Function has to implement multiple protocols, including RTP, ICE, STUN, TURN, and SCTP. These protocols define the “bits on the wire” and “state machines” that make interoperability possible. For browser-to-browser media and data channels to work, browsers must implement these protocols and carefully follow the standards. If they don’t, the whole industry will suffer. There are some issues today with the pre-standard WebRTC browser implementations. For example, one browser today implements a proprietary STUN client that will not work with standard STUN servers. Browser vendors will need to take protocol interoperability very seriously, recognize that this is something new for them, and follow industry best practices and approaches.
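To make the “bits on the wire” point concrete, here is a small Python sketch of a STUN Binding Request header per RFC 5389 (purely illustrative – this is not code from any browser). The fixed magic cookie is how a standards-following agent recognizes STUN traffic; a proprietary client that deviates here simply won’t work with standard servers.

```python
import os
import struct

MAGIC_COOKIE = 0x2112A442   # fixed value required by RFC 5389
BINDING_REQUEST = 0x0001    # STUN message type for a Binding Request

def make_binding_request():
    """Build a minimal 20-byte STUN Binding Request (no attributes)."""
    transaction_id = os.urandom(12)  # random 96-bit transaction ID
    # !HHI = network byte order: type (16 bits), length (16), magic cookie (32)
    return struct.pack("!HHI", BINDING_REQUEST, 0, MAGIC_COOKIE) + transaction_id

def looks_like_stun(packet):
    """Apply the two checks a standard STUN agent makes on incoming
    packets: the magic cookie, and the two zero high bits of the type."""
    if len(packet) < 20:
        return False
    msg_type, _length, cookie = struct.unpack("!HHI", packet[:8])
    return cookie == MAGIC_COOKIE and (msg_type & 0xC000) == 0
```

Two independently written clients and servers interoperate only because both sides agree on every one of these fields – which is exactly what a proprietary STUN variant breaks.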
Codec interoperability is about ensuring that media sessions don’t fail because there is no common codec supported on both ends of the session. There are many codecs in use, and every vendor and service provider seems to have their own favorite. Fortunately, we should be able to avoid this problem for audio. The IETF has just finalized the Opus audio codec for speech and music, published as RFC 6716 this month. It really is a fantastic codec, much better than the rest, making it an easy choice as one mandatory-to-implement (MTI) codec for WebRTC. Opus is also available as open source. The other MTI audio codec is G.711, also known as PCM, which provides interoperability with the VoIP and telephony world and is needed for interworking with the telephone network. The video codec choice is much more difficult. While H.264 is widely used today, there is no royalty-free licensing available for browsers or implementors. As such, it is very difficult to see how it could be chosen as an MTI video codec. Google’s VP8 video codec has been proposed as an alternative, and is available as open source. However, there is much uncertainty about the licensing status of VP8. Should WebRTC deploy without a common video codec, this again could result in interoperability failures.
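The failure mode that MTI codecs are designed to prevent is easy to see in code. Here is a Python sketch of the basic selection logic (the function and the simple preference-list model are my own, for illustration – real SDP negotiation carries much more detail):

```python
def negotiate_codec(offered, answer_preferences):
    """Return the first codec in the answerer's preference order that the
    offerer also listed, or None if there is no codec in common - the
    session failure that mandatory-to-implement codecs prevent."""
    offered_names = {codec.lower() for codec in offered}
    for codec in answer_preferences:
        if codec.lower() in offered_names:
            return codec
    return None
```

With Opus and G.711 both mandatory, the audio intersection is never empty: `negotiate_codec(["opus", "PCMU"], ["PCMU", "G722"])` yields `"PCMU"`. Two endpoints with disjoint video codec lists get `None`, and the video session fails.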
Offer/answer interoperability is perhaps the least understood, but most important, area. Offer/answer refers to the negotiation of codecs, parameters, and settings for the media session or data channel between the two browsers. Even if both browsers use common APIs, standard protocols, and common codecs, if they are unable to successfully negotiate and configure their media or data channel, the connection will fail. WebRTC uses the Session Description Protocol (SDP) for this offer/answer exchange. The pre-standard WebRTC implementations are, frankly, a mess in this area. Their SDP is not standard and not interoperable with anything else. It will take a lot of work to get this right, and we all must insist that browser vendors support standard offer/answer negotiation.
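For readers who haven’t stared at SDP before, here is roughly what the audio portion of a standards-based offer looks like (the address, port, and session ID below are made-up documentation values, and a real WebRTC offer carries many more attributes – ICE candidates, DTLS fingerprints, and so on – omitted here for brevity):

```
v=0
o=- 20518 0 IN IP4 192.0.2.1
s=-
t=0 0
m=audio 49170 RTP/SAVPF 111 0
c=IN IP4 192.0.2.1
a=rtpmap:111 opus/48000/2
a=rtpmap:0 PCMU/8000
a=sendrecv
```

The answerer replies with a description of its own, and the a=rtpmap lines are where codec agreement – or failure – actually happens. When a browser emits non-standard lines here, everything downstream breaks.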
Occasionally, it is suggested that offer/answer would be easier if we didn’t use SDP. We all know and hate SDP; it is ugly and awkward to use. However, it has taken over a decade of work and experience to make it work, and any replacement would likely take as many years to mature. In addition, since much of the standards-based VoIP and video world uses SDP, any replacement would need to map to SDP as well. I can’t see this helping interoperability in any way. Previous efforts to replace SDP failed (anyone remember SDPng?), and anyone advocating replacing SDP needs to explain why a new effort wouldn’t meet a similar end, and why it wouldn’t take a decade. Also, the complexity of offer/answer comes from the complexity of negotiating an end-to-end session; the actual syntax of the descriptions is a very small part of it.
So WebRTC definitely has some interoperability challenges ahead of it. Fortunately, there are many experienced engineers participating and helping with the effort. As long as the browser vendors take this seriously and don’t play games, I think WebRTC will have good interoperability, which will benefit web developers and web users alike.
If you are interested in WebRTC, you might like my new book “WebRTC: APIs and RTCWEB Protocols of the HTML5 Real-Time Web” published this month by Digital Codex LLC.
Today, I’m excited to announce the publication of my new technical book entitled “WebRTC: APIs and RTCWEB Protocols of the HTML5 Real-Time Web”. The book introduces and explains Web Real-Time Communications (RTC), a hot topic in the web and Internet communications industry right now.
Many of us enjoy services such as Skype, but you have to download and install the app before you can talk to anyone. WebRTC browsers have all this built into them – no download, no codecs, no Flash, no plugins needed! This will be really popular with web users. Imagine what Google or Facebook could do with this!
If you want to try WebRTC today, it is already in Google’s Chrome Canary (the developer version). There are live sites out there today – I’ll share them in future posts. It will be available in most browsers starting next year.
If you want to learn about WebRTC, you might find my book, written with my co-author Daniel C. Burnett (of Voxeo), useful. I enjoyed writing it!
Feel free to interact with us on social media, Google+ or Twitter. Comments, suggestions, and opinions are most welcome.
A year and a half ago I embarked on my first self-publishing experience when I published my first novel, Counting from Zero. I had written several other books before, but they were technical, non-fiction books, and I used conventional publishers who handled so many aspects of the book.
Self-publishing was a revelation for me, and I found that I relished the speed, control, and flexibility it gave me. I have had so many wonderful experiences after publishing the book; now I can hardly imagine that I once thought that perhaps it would never be published!
I am in the homestretch of a new self-publishing experience, which has also been a revelation. This time, I am about to self-publish my first non-fiction technical book! Stay tuned for an announcement shortly, perhaps on Monday when I am speaking at an industry conference… perhaps. I won’t talk about the book or the topic today, but I do want to share my experiences, and how self-publishing has been similar and different for non-fiction vs. fiction.
So firstly, why did I choose to self-publish rather than go back to one of the publishers I had worked with in the past? The same reasons as for my novel, which are:
- Speed: My co-author and I finished editing and writing the book just this week. Next week we will have a box of books in hand and a paperback and Kindle edition for sale on Amazon. It just doesn’t get any better than this, especially when your goal is to publish the first book on a given topic.
- Control: Publishers often influence the content of a technical book. They will suggest adding chapters, or including other points of view. Often this is useful, but in this case, for the first time, this to-be-published book contains exactly what I want, and says it exactly as I want to say it. To paraphrase MasterCard, this is priceless! And, I can control pricing. My previous books have been incredibly expensive – this book will be incredibly cheap.
- Flexibility: Timing is everything in technical book publishing, and the ability to provide accurate content at the right time is critical. This book will be up-to-the-minute accurate. In addition, we plan to do frequent new editions to track this fast-moving field. I have done multiple editions of some of my previous books, but usually at 2-3 year intervals. This time, we plan to do new editions at 3-4 month intervals! I know it sounds crazy, and it may turn out to be so, but the point is we can try out this new model, where we put out a book using a software release model rather than a book edition model.
So, what are the downsides of this do-it-yourself model? Mainly just the work involved! Laying out my fiction book was trivial, but doing the same for my non-fiction book was extremely involved. I had to integrate figures, captions, tables of contents, lists of figures, etc. My publisher provided all these things in the past, but now it was all down to me and my co-author.
I’m happy to say we have been successful, and initial feedback from our reviewers is very positive. I can hardly wait for Monday! In my mind, there is no doubt this book will be successful, and it will help the industry and fellow professionals learn about new opportunities.
I guess it is obvious that this self-publishing “fad” is here to stay, even for technical non-fiction books.
If any of you have had self-publishing experience with a technical book, I’d love to hear your experiences. I’ll keep sharing the lessons I’m learning every day in this incredible experience.
Today, Eric Krapf’s NoJitter published an interview with me “Where Do We Stand With SIP? An Interview with Avaya’s Dr. Alan Johnston”.
One activity discussed in the interview is SIPNOC, the SIP Network Operators Conference. The second SIPNOC will be held this June in Herndon, Virginia, and the Call for Presentations just went out. Last year’s event was excellent, and I’m really looking forward to this year’s.
The other is the SIPconnectIT interop testing events, planned for later this year. They will be modeled after the incredibly successful SIPit SIP interoperability test events, but with a focus on SIP trunking and the SIP Forum’s SIPconnect 1.1 Recommendation.
Perhaps see some of you at these events!
Tomorrow is a world-wide day of protest against SOPA and PIPA, as they are being discussed in the United States Congress. As I discussed last month, these bills must be stopped, or the Internet as we know it today will be no more. To explain in technical terms, SOPA and PIPA are a Really Bad Idea.
If you have a website and care about the future of the Internet, why not join in? If you don’t but still want to participate, blog or microblog – tell your friends, family, and acquaintances about this historic event.
We must stop SOPA and PIPA, and ensure that Chinese-style and Iranian-style Internet censorship does not happen in America.
It is just over a month since Amazon announced KDP Select, opening their Kindle Owners Lending Library to independent publishers. After deliberating the pros and cons, I took the plunge, giving the program a try. It has certainly been interesting!
Today, Amazon announced the results of the program so far. First, I’ll share my experience with the program during this month.
After I signed up my techno thriller Counting from Zero, reluctantly saying goodbye to Smashwords, I didn’t have long to wait – the borrows started immediately. After Christmas, I saw another wave of borrows, presumably from new Kindle owners. Then, in the first few days of the month, another surge. (I presume this means that borrows are counted by calendar month rather than in 30-day periods. If so, we will often see lots of borrows at the start of the month.)
In the three weeks of December that KDP Select was active, borrows represented 18% of my eBook’s Amazon activity (sales plus borrows). For January so far, the percentage is about 16%, but with higher numbers of both sales and borrows. I’d estimate overall sales are up about 25% since Christmas. Since my sales have increased but my sales ranking has not, this seems to be a general trend, at least in my category. And since the non-Amazon eBook sales I gave up to participate in KDP Select only accounted for 5% of my total sales, I appear to be ahead of the game, at least in terms of numbers. But the open question was: what would publishers get paid for borrows? Amazon did not commit to any royalty rate when the program was launched, instead saying authors would share a $500,000 pot of money based on borrowing numbers.
Amazon answered that question today in announcing that KDP Select authors will receive $1.70 for each borrow in December, based on 295,000 borrows in December. For my relatively low-priced eBook of $2.99, this isn’t much lower than my normal royalty for a sale, which is about $2. I have yet to try out a free book giveaway day, so I can’t share my experience with this aspect of KDP Select, but I hope to soon.
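For the curious, that “about $2” comes from Amazon’s 70% royalty tier less a small per-download delivery fee. The fee varies by file size, so the $0.06 below is an assumption for illustration:

```python
def kdp_royalty(list_price, royalty_rate=0.70, delivery_fee=0.06):
    """Estimate the per-sale royalty on a 70%-tier Kindle eBook.
    The delivery fee is a rough assumption; Amazon charges by file size."""
    return round(list_price * royalty_rate - delivery_fee, 2)

# A $2.99 eBook nets roughly $2.03 per sale, so a $1.70 borrow
# payout is about 84% of a regular sale.
```

For higher-priced eBooks the gap would be much larger, which is why the flat per-borrow payout favors cheaper titles like mine.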
So, one month in, I do not regret my decision to give KDP Select a try. I see no reason why I won’t renew (re-enlist?) in two more months. However, I am still unhappy about the exclusivity requirement, as are many other independent publishers. Amazon, if you are paying attention, this requirement just stinks and you should drop it.
How was your month with KDP Select?
Are you ready for the kickoff of the 2012 US FIRST Robotics Competition (FRC)? I am! In about 12 hours I’ll be in the middle of an excited group of high school kids at the St Louis Planetarium, watching a NASA video broadcast from Southern New Hampshire University’s Manchester campus, where this year’s competition will be announced!
Last year was my first season mentoring the Roborebels, Team 1329. It was an amazing experience. I can’t wait to hear this year’s challenge. Will the robot be required to drive, climb, grab, jump, or even swim? Or some combination of these? We will find out soon! We will spend the weekend brainstorming designs and poring over the specs. At the end of the 7-week build season, the robots will be put to the test in the friendly “Coopertition” that founder Dean Kamen likes to describe.
I was amazed by what the students designed and built last year. I’m sure this year will be the same.
On the educational side, it is such an inspiring sight to see so many young men and women getting excited about science and engineering. After all, in a few years it will be up to them to take the lead in our economy and solve the next generation of technical challenges.
In the meantime, I need to put on my safety glasses and roll up my sleeves to help the students.
Best of luck to everyone involved!
As 2011 draws to a close, I wanted to take a moment to thank everyone who has helped me this year. It has been an amazing year! Here’s a short list of my highlights:
– In January I gave a SIP Tutorial for the FCC staff in DC. It was a great event, and hopefully I will get a chance to do it again in 2012. The FCC has lots of VoIP and SIP work to do with the transition of the PSTN and E911 to all-VoIP. Hopefully we can soon end the ridiculous subsidies for rural telephone service and instead use them to subsidize high-speed Internet service for rural areas. My friend Henning Schulzrinne was just appointed Chief Technology Officer, so I know the FCC is in good hands technically. I also enjoyed giving the SIP Tutorial in Miami, Sydney, and Austin.
– In February I published my first novel, a techno thriller about a massive attack on the Internet that gives this blog its name – Counting from Zero. Little did I know how many hacking and security stories there would be in 2011. Some have even called 2011 the Year of the Hacktivist, which is hard to argue with. Overall, I couldn’t be happier with the response to the book. Thank you so much to everyone who has read, reviewed, tweeted, or blogged about it – I am very grateful. Look for more book news in early 2012…
– In March I participated in my first robotics competition. The experience was amazing, and I look forward to the start of another build season in just over a week!
– In April, the ZRTP VoIP media security protocol was published as an RFC by the IETF, after 6 years of hard work. Editing this document is my small contribution to making the Internet more secure. Here’s to more adoption and deployment in 2012.
– In May the RTCWEB Working Group was chartered by the IETF. The work is progressing slowly but steadily. I expect more progress in 2012, and hope for some strong security to be built into the protocols – let’s show that we have learned something over the years…
– In June, I participated in the first ever SIP Network Operators Conference, or SIPNOC for short. It was a great success and really shows how SIP has grown up. I am privileged to have another term on the Board of Directors of the SIP Forum. With the publication of SIPconnect, the SIP trunking recommendation, the business use of SIP continues to grow and expand.
– In November, I had my first experience as a cricket coach. My son started the Priory Amateur Cricket Association, or PACA, as a club at his school. It has been a blast so far helping the boys learn the basics of cricket. They have done a great job, although we need to reduce the number of no-balls! In 2012 we plan to play a one-day match against a local cricket club.
So, here’s to 2011 – it was definitely an interesting year! I hope it was a good one for you and yours. Here’s to 2012!
Next month, I’m excited to be giving a public lecture sponsored by The Tuesday Women’s Association (TWA) and the American Association of University Women (AAUW). It is part of their 2012 International Relations Lecture Series and is entitled Cyberspace: A New Cold War Front. It will be held on January 10, 2012 at 10:45am at the Ethical Society building on 9001 Clayton Rd., St. Louis, MO 63117.
I’m really looking forward to it. I’m used to lecturing at Washington University, giving industry tutorials, and making business and standards body presentations, but a public lecture like this is something different!
And this is a really interesting topic, too. I’ll be talking about Stuxnet, and other industrial cyber espionage. I’ll get to talk about the attacks on Google originating from China. I’ll talk about hacking as a weapon in various conflicts between Russia and former Soviet republics.
Of course, I’ll try to educate about computer and Internet security, drawing some examples from my techno thriller cyber crime mystery Counting from Zero. While it is mainly about cyber crime for profit, the techniques and attacks are similar.
If you are in St Louis, it would be great to see you there. If not, maybe I’ll post a recording or at least my slides on this blog.