On Saturday, I gave a presentation and demo of ZRTP at Hackfest 2013, organized by the Washington University in St. Louis chapter of the ACM (Association for Computing Machinery).
A group of about 60 undergrads had gathered in Urbauer 211 to learn about hacking and try it out. I gave a short presentation about ZRTP, the media path keying protocol for SRTP invented by Phil Zimmermann.
I was fortunate to serve as the editor of the ZRTP specification, which was published as RFC 6189 two years ago. I showed how ZRTP allows users to detect the presence of a Man-in-the-Middle (MitM) attacker by checking the Short Authentication String (SAS).
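The SAS check can be sketched in a few lines of code. This is an illustrative sketch only, not the RFC 6189 derivation: the function and the toy word list below are invented for illustration (real ZRTP renders the leftmost bits of an HMAC, the "sashash", as base32 characters or PGP words).

```python
import hashlib

# Toy word list for illustration; RFC 6189's word-based SAS uses the PGP word lists.
WORDS = ["alpha", "bravo", "charlie", "delta", "echo", "foxtrot",
         "golf", "hotel", "india", "juliet", "kilo", "lima",
         "mike", "november", "oscar", "papa"]

def short_auth_string(shared_secret: bytes) -> str:
    """Derive a two-word SAS from shared key material (illustrative only)."""
    digest = hashlib.sha256(shared_secret).digest()
    return f"{WORDS[digest[0] % 16]} {WORDS[digest[1] % 16]}"

# Both endpoints compute the SAS from the same key agreement result, then the
# users read the strings to each other over the voice channel. A MitM who ran
# two separate key agreements would produce mismatched strings.
alice = short_auth_string(b"shared-dh-result")
bob = short_auth_string(b"shared-dh-result")
assert alice == bob
```

The key point is that the SAS is derived from the negotiated key material itself, so an attacker sitting in the middle cannot make both sides see the same short string without also knowing the keys.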
Here is a PDF of my presentation.
Then I used the Jitsi open source voice, video, & chat application to demo ZRTP. Emil Ivov, founder and chief developer at Jitsi, answered my ZRTP call, and we checked the SAS. The sequence of steps used to secure the voice & video session is shown in this animated GIF.
Afterwards, I gave away a copy of Counting from Zero, my technothriller that incorporates elements of ZRTP, hacking, exploits, and zero-day attacks.
We then spent the rest of the afternoon playing with Metasploit on an isolated network of virtual Windows machines. It was an interesting day. Just like at IETF meetings, the biggest excitement of the afternoon was when the cookies arrived!
Perhaps at next year’s session, we can try out VoIP hacking tools such as SIPvicious!
The giveaway is hosted by Goodreads, the social reading site. If you love to read but haven’t found Goodreads yet, you should check it out!
From now until November 9, you can sign up to win a paperback copy. Winners will be notified on November 10.
WebRTC, or Web Real-Time Communications, is a fast-moving topic these days! Here are a few of my suggestions for how to keep up.
First, a note about terminology. Although Google named their open source project webrtc, WebRTC is not just a Google project; it is a major industry initiative involving open Internet standards being developed by many participants. Don’t confuse the two!
Google and Mozilla are the browser vendors most actively implementing WebRTC today. WebRTC is available in the Google Chrome Beta browser; download it and give it a try for the latest WebRTC extensions. Some future WebRTC capabilities may appear first in Google’s Chrome Canary, the developer preview version of the browser. To experiment with Mozilla Firefox, you will need to use their nightly build. Microsoft Internet Explorer and Apple Safari don’t yet have anything available, but you can track their future announcements here and here.
WebRTC is not just about browser deployments; it is about standard APIs and standard protocols. To really follow what is going on in WebRTC, you need to track the standards being developed in the W3C and the IETF. This can be a bit tricky, but the W3C WEBRTC Working Group and the IETF RTCWEB Working Group are good places to start.
If you have an eReader, try this out. Here is a link to download the entire set of RTCWEB IETF Internet-Drafts in EPUB format, and here is the set in MOBI format. Various other sets of IETF documents and RFCs are also available at http://tools.ietf.org/ebook/. The conversion is done using a script written by Tero Kivinen – nice job! The formatting of the ASCII art is not 100%, but this is a difficult problem. The MOBI format worked better for me than the EPUB version, but YMMV. Perhaps one day the IETF will adopt a friendlier format for Internet-Drafts and RFCs, but I’m not holding my breath!
Try WebRTC sites and applications
There are a number of sites and applications already taking advantage of WebRTC features. One of my favorites is FrisB, a cool new way to think about browser to PSTN communication. You can find plenty of others by searching the web. Also, many developers announce and discuss their WebRTC projects on Twitter, so searching with the #webrtc hashtag can find lots of cool things.
There are some interesting blogs out there on WebRTC, including a blog by Tsahi Levent-Levi.
For background on WebRTC, there are some decent resources. You might enjoy this video presentation by one of the editors of the W3C WebRTC specification, Cullen Jennings. If you like books, you might like “WebRTC: APIs and RTCWEB Protocols of the HTML5 Real-Time Web”, which I wrote with Dan Burnett, a co-author of both the main WebRTC specification and the Media Capture and Streams specification.
Best of luck in following WebRTC! Feel free to share your own favorite ways and links to follow this work.
I’ve spent many years of my career working on interoperability in communication systems. Back in the dark ages, I did SS7 interoperability testing. During my CLEC days, I ran a test lab that tested optical, telephony, and ATM/Frame Relay equipment. I’ve spent many years working on interoperability issues with SIP, starting with the SIP call flows (RFC 3665 and RFC 3666) and then the SDP offer/answer examples (RFC 4317). I’ve also been to many SIPits (SIP interoperability events run by the SIP Forum), testing voice and video interoperability.
WebRTC poses some interesting interoperability challenges, but I am hopeful we will get it right.
There are four different areas of interoperability: browser, protocol, codec, and offer/answer. Let’s go through them one by one.
Browser interoperability is about a WebRTC application or site working the same regardless of which browser the user is using. In the past, browser interoperability was just a browser/server issue, but with the peer-to-peer media and data channel flows of WebRTC, it is now also a browser/browser issue. The good news is that there are only a handful of browsers, so the interop matrix is not too large. The bad news is that there are signs of discord already in pre-standards implementations. For one thing, all browsers must utilize the same APIs, or else WebRTC will be a major headache for developers. Of course, libraries can hide this complexity from developers, but this will slow down deployment and produce some needlessly bad user experiences. If we see one browser vendor using their own APIs instead of the standard ones from the W3C, then we will know that someone is playing company games at the expense of the Internet users of the world. Hopefully this won’t happen, but if it does, users and developers will likely move away from that browser.
Protocol interoperability is a major concern for WebRTC. In the past, browsers didn’t implement many protocols – everything used HTTP (Hypertext Transfer Protocol). Today, browsers are doing more, including WebSockets, and will soon move to the next version of HTTP, 2.0. With WebRTC, the browser RTC Function has to implement multiple protocols including RTP, ICE, STUN, TURN, and SCTP. These protocols define the “bits on the wire” and the state machines that make interoperability work. For browser-to-browser media and data channels to work, browsers must implement these protocols and carefully follow the standards. If they don’t, the whole industry will suffer. There are some issues today with the pre-standard WebRTC browser implementations. For example, one browser today implements a proprietary STUN client that will not work with standard STUN servers. Browser vendors will need to take protocol interoperability very seriously, recognize that this is something new for them, and follow industry best practices and approaches.
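To make “bits on the wire” concrete, here is a minimal sketch of one of these protocol messages: the fixed 20-byte header of a STUN Binding Request as defined in RFC 5389. A client that gets any of these fields wrong simply will not interoperate with a standard STUN server. The function name is mine; the field layout and constants are from the RFC.

```python
import os
import struct

STUN_MAGIC_COOKIE = 0x2112A442  # fixed value defined in RFC 5389
BINDING_REQUEST = 0x0001        # message type: method=Binding, class=request

def stun_binding_request() -> bytes:
    """Build a minimal STUN Binding Request with no attributes.

    Header layout (RFC 5389): 16-bit message type, 16-bit message length
    (counts attributes only, so 0 here), 32-bit magic cookie, and a
    96-bit random transaction ID. All fields are big-endian.
    """
    transaction_id = os.urandom(12)
    header = struct.pack("!HHI", BINDING_REQUEST, 0, STUN_MAGIC_COOKIE)
    return header + transaction_id

msg = stun_binding_request()
assert len(msg) == 20                    # header only, no attributes
assert msg[4:8] == b"\x21\x12\xa4\x42"   # magic cookie on the wire
```

A proprietary client that, say, omits the magic cookie or uses a different message type would produce bytes a standard server rejects, which is exactly the kind of interoperability failure described above.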
Codec interoperability is about ensuring that media sessions don’t fail because there is no common codec supported on both ends of the session. There are so many codecs in use, and every vendor and service provider seems to have their own favorite one. Fortunately, we should be able to avoid this problem for audio codecs. The IETF has recently finalized the Opus audio codec for speech and music, published as RFC 6716 this month. It really is a fantastic codec, much better than all the rest, making it an easy choice as one mandatory-to-implement (MTI) codec for WebRTC. Opus is also available as open source. The other MTI audio codec is G.711, also known as PCM, which provides interoperability with the VoIP and telephony world and is also needed for interworking with the telephone network. The video codec choice is much more difficult. While H.264 is widely used today, there are no open source implementations or royalty-free licensing available for browsers or implementors. As such, it is very difficult to see how it could be chosen as the MTI video codec. Google’s VP8 video codec has been proposed as an alternative and is available as open source. However, there is much uncertainty about the licensing status of VP8. Should WebRTC deploy without a common video codec, this again could result in interoperability problems.
Offer/answer interoperability is perhaps the least understood, but most important, area. Offer/answer refers to the negotiation of codecs, parameters, and settings for the media session or data channel between the two browsers. Even if both browsers use common APIs, standard protocols, and common codecs, if they are unable to successfully negotiate and configure their media or data channel, the connection will fail. WebRTC uses the Session Description Protocol (SDP) for this offer/answer exchange. The pre-standard WebRTC implementations are, frankly, a mess in this area. Their SDP is not standard and not interoperable with anything else. It will take a lot of work to get this right, and we all must insist that browser vendors support standard offer/answer negotiation.
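The codec-selection step of offer/answer can be illustrated with a toy sketch. This is not a real SDP parser, and the helper names are invented; it only shows the core idea of RFC 3264 negotiation: the answerer accepts the intersection of the offered codecs and its own capabilities, keeping the offerer’s preference order, and the session fails if that intersection is empty.

```python
def parse_audio_codecs(sdp: str) -> dict:
    """Extract payload type -> codec name from a=rtpmap lines (toy parser)."""
    codecs = {}
    for line in sdp.splitlines():
        if line.startswith("a=rtpmap:"):
            pt, name = line[len("a=rtpmap:"):].split(" ", 1)
            codecs[int(pt)] = name.split("/")[0]  # keep name, drop clock rate
    return codecs

def answer_codecs(offer_sdp: str, supported: list) -> list:
    """Codecs an answerer would accept: the intersection of the offer with
    local support, in the offerer's preference order."""
    offered = parse_audio_codecs(offer_sdp)
    return [name for name in offered.values() if name in supported]

offer = """v=0
m=audio 49170 RTP/SAVPF 111 0
a=rtpmap:111 opus/48000/2
a=rtpmap:0 PCMU/8000
"""

# An answerer supporting only G.711 (PCMU) still interoperates, because PCMU
# appears in the offer; an empty intersection would mean a failed session.
print(answer_codecs(offer, ["PCMU"]))  # -> ['PCMU']
```

The real negotiation covers far more than codecs (transport candidates, crypto parameters, directionality, and so on), which is why non-standard SDP from a pre-standard implementation can break the whole session even when both ends share codecs.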
Occasionally, it is suggested that offer/answer would be easier if we didn’t use SDP. We all know and hate SDP; it is ugly and awkward to use. However, it has taken over a decade of work and experience to make it interoperate, and any replacement would likely take as long to mature. In addition, since much of the standards-based VoIP and video world uses SDP, any replacement would need to map to SDP as well. I can’t see this helping interoperability in any way. Previous efforts to replace SDP failed (anyone remember SDPng?), and I think anyone advocating replacing SDP needs to explain why a new effort wouldn’t meet a similar end, and why it wouldn’t take a decade. Also, the complexities of offer/answer stem from the complexities of negotiating an end-to-end session; the actual syntax of the descriptions is a very small part of the problem.
So WebRTC definitely has some interoperability challenges ahead of it. Fortunately, there are many experienced engineers who are participating and helping with the effort. As long as the browser vendors take this seriously and don’t play games, I think WebRTC will have good interoperability, which will benefit web developers and web users alike.
If you are interested in WebRTC, you might like my new book “WebRTC: APIs and RTCWEB Protocols of the HTML5 Real-Time Web” published this month by Digital Codex LLC.
Today, I’m excited to announce the publication of my new technical book, “WebRTC: APIs and RTCWEB Protocols of the HTML5 Real-Time Web”. The book introduces and explains Web Real-Time Communications (RTC), a hot topic in the web and Internet communications industry right now.
Many of us enjoy services such as Skype, but you have to download and install the app before you can talk to anyone. WebRTC browsers have all this built in – no download, no codecs, no Flash, no plugins needed! This will be really popular with web users. Imagine what Google or Facebook could do with this!
If you want to try WebRTC today, it is already in Google’s Chrome Canary (the developer version). There are live sites out there today – I’ll share them in future posts. It will be available in most browsers starting next year.
If you want to learn about WebRTC, you might find my book, written with my co-author Daniel C. Burnett of Voxeo, useful. I enjoyed writing it!
Feel free to interact with us on social media, Google+ or Twitter. Comments, suggestions, and opinions are most welcome.
A year and a half ago I embarked on my first self-publishing experience when I published my first novel, Counting from Zero. I had written several other books before, but they were technical, non-fiction books, and I used conventional publishers who handled so many aspects of the book.
Self-publishing was a revelation for me, and I found that I relished the speed, control, and flexibility it gave me. I have had so many wonderful experiences after publishing the book; now I can hardly imagine that I once thought that perhaps it would never be published!
I am in the homestretch of a new self-publishing experience, which has also been a revelation. This time, I am about to self-publish my first non-fiction technical title! Stay tuned for an announcement shortly, perhaps on Monday when I am speaking at an industry conference. I won’t talk about the book or the topic today, but I do want to share my experiences, and how self-publishing has been similar or different for non-fiction vs. fiction.
So firstly, why did I choose to self-publish rather than go back to one of the publishers I had worked with in the past? The same reasons as for my novel, which are:
- Speed: My co-author and I finished editing and writing the book just this week. Next week we will have a box of books in hand and a paperback and Kindle edition for sale on Amazon. It just doesn’t get any better than this, especially when your goal is to publish the first book on a given topic.
- Control: Publishers often influence the content of a technical book. They will suggest adding chapters, or including other points of view. Often this is useful, but in this case, for the first time, this to-be-published book contains exactly what I want, and says it exactly as I want to say it. To paraphrase MasterCard, this is priceless! And, I can control pricing. My previous books have been incredibly expensive – this book will be incredibly cheap.
- Flexibility: Timing is everything in technical book publishing, and the ability to provide the right content at the right time is critical. This book will be up-to-the-minute accurate. In addition, we plan to do frequent new editions to track this fast-moving field. I have done multiple editions of some of my previous books, but usually at 2-3 year intervals. This time, we plan to do new editions at 3-4 month intervals! I know it sounds crazy, and it may turn out to be, but the point is we can try out a new model, where we put out a book using a software release model rather than a book edition model.
So, what are the downsides of this do-it-yourself model? Mainly just the work involved! Laying out my fiction book was trivial, but doing the same for my non-fiction book was much more involved. I had to integrate figures, captions, tables of contents, lists of figures, etc. My publisher provided all these things in the past, but now it was all down to me and my co-author.
I’m happy to say we have been successful, and initial feedback from our reviewers is very positive. I can hardly wait for Monday! In my mind, there is no doubt this book will be successful, and it will help the industry and fellow professionals learn about new opportunities.
I guess it is obvious that this self-publishing trend is here to stay, even for technical non-fiction books.
If any of you have had self-publishing experience with a technical book, I’d love to hear your experiences. I’ll keep sharing the lessons I’m learning every day in this incredible experience.