Returning to Zero continues the story of the Zed.Kicker botnet, and the efforts of white hat hacker Mick O’Malley and his friends to contain and destroy it.
A lot has changed in the six years since Counting from Zero, especially in internet security and privacy.
Six years ago, not too many people outside certain internet security circles had ever heard of a ‘botnet’, a robot network of compromised computers. Botnets back then had tens of thousands of computers, which was why my fictional Zed.Kicker botnet with millions of devices was so powerful. Today, there are many botnets with millions of devices.
Six years ago, few understood how dangerous a large botnet can be, with its distributed denial of service (DDoS) attacks. Today, botnets routinely launch such attacks.
Six years ago, only a paranoid few thought about pervasive surveillance, and the notion that without taking measures, all our activities online were being tracked and recorded by governments, our own and others. In this post-Snowden era (three years ago, believe it or not), we know the extent and the invasiveness of the surveillance. (Just for fun, here is a photo of me asking Edward Snowden a question via a WebRTC video link after a screening of CITIZEN FOUR at IETF-93).
Six years ago, cybercrime, including ransomware and other threats, was a rarity. Today it is unfortunately common.
Six years ago, many of us were concerned about our communication security: how to encrypt and authenticate our messaging and calls. The ‘Security and Other Lies’ blog entries in Counting from Zero reflect this emphasis. Today, privacy is a bigger concern, and how to minimize the metadata about our communication and messaging is discussed in the ‘Privacy and Other Mirages’ blog entries in Returning to Zero.
One thing that hasn’t changed in six years is the excitement and nervous energy involved in launching a new book. I can’t wait for Returning to Zero to be available and to get feedback and comments!
And looking forward another six years? Who knows…
Returning to Zero, the sequel to Counting from Zero and the second book in the Mick O’Malley series, will be available on Amazon in Kindle and Paperback editions on February 25, 2017.
I am very proud of the Third Edition of the WebRTC Book that came out just a few weeks ago. My co-author Dan and I have been working on it for months, and it is always exciting to launch a new edition!
We worked feverishly during the IETF-89 meeting in London to get all the updates finished – all the APIs, protocols, and standards referenced should be up to date as of then (the first week of March). We also had a lot of fun testing and doing screen captures of the new Demo Application, which now utilizes the WebRTC data channel for Real-Time Text (RTT) between the two browsers. I’ll write about RTT and how much fun it is compared to normal texting or instant messaging in another post. Making use of the data channel APIs and protocols and showing the interoperability between the Chrome and Firefox browsers was a lot of fun as well.
The Demo Application can also now utilize a TURN server for enhanced NAT traversal. In some circumstances, NATs or firewalls will prevent a direct peer-to-peer Peer Connection from being established between two browsers, and a relay in the cloud is needed. If the Demo Application fails for you, try reloading the page with ?turnuri=1 added to the URL and see if that works.
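To give a flavor of how a page might switch a TURN relay on via a query parameter like ?turnuri=1, here is a small sketch in Node.js. The STUN/TURN server URLs and credentials below are placeholders for illustration, not the book’s actual Demo Application servers:

```javascript
// Sketch: choose ICE servers based on a ?turnuri=1 query parameter.
// Server URLs and credentials are placeholders, not real servers.
function buildIceServers(pageUrl) {
  const params = new URL(pageUrl).searchParams;
  // A STUN server is always useful for discovering the public address
  const iceServers = [{ urls: 'stun:stun.example.com:3478' }];
  if (params.get('turnuri') === '1') {
    // Add a TURN relay for when a direct peer-to-peer path is blocked
    iceServers.push({
      urls: 'turn:turn.example.com:3478',
      username: 'demo',      // placeholder credentials
      credential: 'secret'
    });
  }
  return iceServers;
}

// In a browser, this object would be passed to new RTCPeerConnection(...)
console.log(buildIceServers('https://demo.example.com/?turnuri=1'));
```

In a real page, the returned object would configure the Peer Connection so ICE can fall back to the relay when the direct path fails.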
Also new for this edition is a description of how to analyze WebRTC protocols on your computer using the excellent open source packet capture and analysis tool Wireshark. Between Wireshark and various browser tools (try Tools/Developer Tools in Chrome and Tools/Web Developer in Firefox, or chrome://webrtc-internals in Chrome for lots of useful WebRTC info), you can learn a lot just by playing with WebRTC. If your application is not working, these tools allow you to debug and analyze what is happening.
Finally, Dan’s introduction to the WebRTC API has been greatly expanded with step-by-step introductions to the various functional parts of the client and server code. As always, you can download all of our Demo Application code from our book website, and see it running as well.
We have received so much excellent feedback in the one and a half years since we published the first edition. We can’t wait to hear from you on what you think of the Third Edition. We enjoy hearing from you on Twitter, Facebook, or Google+.
The giveaway is hosted by Goodreads, the social reading site. If you love to read but haven’t found Goodreads yet, you should check it out!
From now until November 9, you can sign up to win a paperback copy. Winners will be notified on November 10.
I’ve spent many years of my career working on interoperability in communication systems. Back in the dark ages, I did SS7 interoperability testing. During my CLEC days, I ran a test lab that tested optical, telephony, and ATM/Frame Relay equipment. I’ve spent many years working on interoperability issues with SIP, starting with the SIP call flows (RFC 3665 and RFC 3666) and then SDP offer/answer (RFC 4317). I’ve also been to many SIPits (SIP interoperability events run by the SIP Forum), testing voice and video interoperability.
WebRTC poses some interesting interoperability challenges, but I am hopeful we will get it right.
There are four different areas of interoperability: browser, protocol, codec, and offer/answer. Let’s go through them one by one.
Browser interoperability is about a WebRTC application or site working the same regardless of which browser the user is using. In the past, browser interoperability was just a browser/server issue, but with the peer-to-peer media and data channel flows of WebRTC, it is now also a browser/browser issue. The good news is that there are only a handful of browsers, so the interop matrix is not too large. The bad news is that there are signs of discord already in pre-standards implementations. For one thing, all browsers must implement the same APIs, or else WebRTC will be a major headache for developers. Of course, libraries can hide this complexity from developers, but this will slow down deployment and produce needlessly bad user experiences. If we see one browser vendor using their own APIs instead of the standard ones from the W3C, then we will know that someone is playing company games at the expense of the Internet users of the world. Hopefully this won’t happen, but if it does, users and developers will likely move away from that browser.
Protocol interoperability is a major concern for WebRTC. In the past, browsers didn’t implement many protocols – everything used HTTP (Hypertext Transfer Protocol). Today, browsers are doing more, including WebSockets, and will soon move to the next version of HTTP, 2.0. With WebRTC, the browser RTC Function has to implement multiple protocols including RTP, ICE, STUN, TURN, SCTP, etc. These protocols define the “bits on the wire” and “state machines” that ensure interoperability. For browser-to-browser media and data channels to work, browsers must implement these protocols and carefully follow the standards. If they don’t, the whole industry will suffer. There are some issues today with the pre-standard WebRTC browser implementations. For example, one browser today implements a proprietary STUN client that will not work with standard STUN servers. Browser vendors will need to take protocol interoperability very seriously, recognize that this is something new for them, and follow industry best practices and approaches.
Codec interoperability is about ensuring that media sessions don’t fail because there is no common codec supported on both ends of the session. There are so many codecs in use, and every vendor and service provider seems to have their own favorite. Fortunately, we should be able to avoid this problem for audio codecs. The IETF has recently finalized the Opus audio codec for speech and music, published as RFC 6716 this month. It really is a fantastic codec, much better than all the rest, making it an easy choice as one mandatory to implement (MTI) codec for WebRTC. Opus is also available as open source. The other MTI codec is G.711, also known as PCM, which provides interoperability with the VoIP and telephony world and is also needed for interworking with the telephone network. Video codec choice is much more difficult. While H.264 is widely used today, there is no royalty-free licensing available for browsers or implementors. As such, it is very difficult to see how it could be chosen as the MTI video codec. Google’s VP8 video codec is proposed as an alternative, and is available in open source. However, there is much uncertainty about the licensing status of VP8. Should WebRTC deploy without a common video codec, this could again result in interoperability problems.
Offer/answer interoperability is perhaps the least understood, but most important, area. Offer/answer refers to the negotiation of codecs, parameters, and settings for the media session or data channel between the two browsers. Even if both browsers use common APIs, standard protocols, and common codecs, if they are unable to successfully negotiate and configure their media or data channel, the connection will fail. WebRTC uses the Session Description Protocol (SDP) for this offer/answer exchange. The pre-standard WebRTC implementations are, frankly, a mess in this area. Their SDP is not standard and not interoperable with anything else. It will take a lot of work to get this right, and we all must insist that browser vendors support standard offer/answer negotiations.
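To make the idea concrete, here is a toy sketch (in Node.js, nothing like a full implementation) of the heart of codec negotiation: intersecting the audio codecs in an offered SDP with the codecs supported locally. The SDP fragment and supported-codec list are made up for illustration; real WebRTC SDP carries far more than rtpmap lines:

```javascript
// Toy offer/answer sketch: find the codecs common to the offer and to
// our own supported list. The session fails if the result is empty.
function negotiateAudio(offerSdp, supported) {
  const offered = [];
  for (const line of offerSdp.split(/\r?\n/)) {
    // a=rtpmap lines bind a payload type number to a codec name
    const m = line.match(/^a=rtpmap:(\d+) ([^/]+)/);
    if (m) offered.push({ pt: Number(m[1]), name: m[2] });
  }
  return offered.filter(codec => supported.includes(codec.name));
}

// A made-up audio m= section from an offer
const offer = [
  'm=audio 49170 UDP/TLS/RTP/SAVPF 111 0 9',
  'a=rtpmap:111 opus/48000/2',
  'a=rtpmap:0 PCMU/8000',
  'a=rtpmap:9 G722/8000'
].join('\r\n');

// Leaves opus (payload type 111) and PCMU (payload type 0)
console.log(negotiateAudio(offer, ['opus', 'PCMU']));
```

Even this trivial intersection only works if both sides emit SDP the other can parse, which is exactly where the pre-standard implementations fall down.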
Occasionally, it is suggested that offer/answer would be easier if we didn’t use SDP. We all know and hate SDP; it is ugly and awkward to use. However, it has taken over a decade of work and experience to make it work, and any replacement would likely take as many years to mature. In addition, since much of the standards-based VoIP and video world uses SDP, any replacement would need to map to SDP as well. I can’t see this helping interoperability in any way. Previous efforts to replace SDP failed (anyone remember SDPng?), and I think anyone advocating replacing SDP needs to explain why a new effort wouldn’t meet a similar end, and why it wouldn’t take a decade. Also, the complexities of offer/answer come from the complexities of negotiating an end-to-end session; the actual syntax of the descriptions is a very small part of it.
So WebRTC definitely has some interoperability challenges ahead of it. Fortunately, there are many experienced engineers who are participating and helping with the effort. As long as the browser vendors take this seriously and don’t play games, I think WebRTC will have good interoperability, which will benefit web developers and web users alike.
If you are interested in WebRTC, you might like my new book “WebRTC: APIs and RTCWEB Protocols of the HTML5 Real-Time Web” published this month by Digital Codex LLC.
Today, I’m excited to announce the publication of my new technical book, “WebRTC: APIs and RTCWEB Protocols of the HTML5 Real-Time Web”. The book introduces and explains Web Real-Time Communications (RTC), a hot topic in the web and Internet communications industry right now.
Many of us enjoy services such as Skype, but you have to download and install the app before you can talk to anyone. WebRTC browsers have all this built in – no downloads, no codecs, no Flash, no plugins needed! This will be really popular with web users. Imagine what Google or Facebook could do with this!
If you want to try WebRTC today, it is already in Google’s Chrome Canary (the developer version). There are live sites out there today – I’ll share them in future posts. It will be available in most browsers starting next year.
If you want to learn about WebRTC, you might find my book, written with my co-author Daniel C. Burnett of Voxeo, useful. I enjoyed writing it!
Feel free to interact with us on social media, Google+ or Twitter. Comments, suggestions, and opinions are most welcome.
It is just over a month since Amazon announced KDP Select, opening their Kindle Owners’ Lending Library to independent publishers. After deliberating the pros and cons, I took the plunge and gave the program a try. It has certainly been interesting!
Today, Amazon announced the results of the program so far. First, I’ll share my experience with the program during this month.
After I signed up my techno thriller Counting from Zero, reluctantly saying goodbye to Smashwords, I didn’t have long to wait – the borrows started happening immediately. After Christmas, I saw another wave of borrows, presumably from new Kindle owners. Then, in the first few days of the month, another surge. (I presume this means that borrows are counted by calendar month rather than in 30-day periods. If so, we will often see lots of borrows at the start of the month.)
In the three weeks of December that KDP Select was active, borrows represented 18% of Amazon activity (sales plus borrows) for my eBook. For January so far, the percentage is about 16%, but with higher numbers of both sales and borrows. Overall sales seem to be up about 25% since Christmas. Since my sales have increased but my sales ranking has not, this seems to be a general trend, at least in my category. Looking at my numbers, since the non-Amazon eBook sales I gave up to participate in KDP Select only accounted for 5% of my total sales, I appear to be ahead of the game, at least in terms of numbers. But the question remained: what would publishers be paid for borrows? Amazon did not commit to any royalty rate when the program launched, instead saying authors would share a $500,000 pot of money based on borrowing numbers.
Amazon answered that question today, announcing that KDP Select authors will receive $1.70 for each borrow in December, based on 295,000 borrows. For my relatively low-priced eBook at $2.99, this isn’t much lower than my normal royalty for a sale, which is about $2. I have yet to try a free book giveaway day, so I can’t share my experience with that aspect of KDP Select, but I hope to soon.
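That per-borrow figure lines up with the pot Amazon described at launch – a quick back-of-envelope check:

```javascript
// Rough check: the $500,000 December pot split across 295,000 borrows
const pot = 500000;
const borrows = 295000;
const perBorrow = pot / borrows;
console.log(perBorrow.toFixed(2)); // "1.69", close to the announced $1.70
```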
So, one month in, I do not regret my decision to give KDP Select a try. I see no reason why I won’t renew (re-enlist?) in two more months. However, I am still unhappy about the exclusivity requirement, as are many other independent publishers. Amazon, if you are paying attention, this requirement just stinks and you should drop it.
How was your month with KDP Select?
As 2011 draws to a close, I wanted to take a moment to thank everyone who has helped me this year. It has been an amazing year! Here’s a short list of my highlights:
– In January I gave a SIP Tutorial for the FCC staff in DC. It was a great event, and hopefully I will get a chance to do it again in 2012. The FCC has lots of VoIP and SIP work to do with the transition of the PSTN and E911 to all-VoIP. Hopefully we can soon end the ridiculous subsidies for rural telephone service and instead use them to subsidize high-speed Internet service for rural areas. My friend Henning Schulzrinne was just appointed Chief Technology Officer, so I know the FCC is in good hands technically. I also enjoyed giving the SIP Tutorial in Miami, Sydney, and Austin.
– In February I published my first novel, a techno thriller about a massive attack on the Internet that gives this blog its name – Counting from Zero. Little did I know how many hacking and security stories there would be in 2011. Some have even called 2011 the Year of the Hacktivist, which is hard to argue with. Overall, I couldn’t be happier with the response to the book. Thank you so much to everyone who has read, reviewed, tweeted, or blogged about it – I am very grateful. Look for more book news in early 2012…
– In March I participated in my first robotics competition. The experience was amazing, and I look forward to the start of another build season in just over a week!
– In April, the ZRTP VoIP media security protocol was published as an RFC by the IETF, after 6 years of hard work. Editing this document is my small contribution to making the Internet more secure. Here’s to more adoption and deployment in 2012.
– In May the RTCWEB Working Group was chartered by the IETF. The work is progressing slowly but steadily. I expect more progress in 2012, and hope for some strong security to be built into the protocols – let’s show that we have learned something over the years…
– In June, I participated in the first ever SIP Network Operators Conference, or SIPNOC for short. It was a great success and really shows how SIP has grown up. I am privileged to have another term on the Board of Directors of the SIP Forum. With the publication of SIPconnect, the SIP trunking recommendation, the business use of SIP continues to grow and expand.
– In November, I had my first experience as a cricket coach. My son started the Priory Amateur Cricket Association, or PACA, as a club at his school. It has been a blast so far helping the boys learn the basics of cricket. They have done a great job, although we need to reduce the number of no-balls! In 2012 we plan to play a one-day match against a local cricket club.
So, here’s to 2011 – it was definitely an interesting year! I hope it was a good one for you and yours. Here’s to 2012!
Next month, I’m excited to be giving a public lecture sponsored by The Tuesday Women’s Association (TWA) and the American Association of University Women (AAUW). It is part of their 2012 International Relations Lecture Series and is entitled Cyberspace: A New Cold War Front. It will be held on January 10, 2012 at 10:45am at the Ethical Society building at 9001 Clayton Rd., St. Louis, MO 63117.
I’m really looking forward to it. I’m used to lecturing at Washington University, giving industry tutorials, and making business and standards body presentations, but a public lecture like this is something different!
And this is a really interesting topic, too. I’ll be talking about Stuxnet, and other industrial cyber espionage. I’ll get to talk about the attacks on Google originating from China. I’ll talk about hacking as a weapon in various conflicts between Russia and former Soviet republics.
Of course, I’ll try to educate about computer and Internet security, drawing some examples from my techno thriller Counting from Zero. While the book is mainly about cyber crime for profit, the techniques and attacks are similar.
If you are in St. Louis, it would be great to see you there. If not, maybe I’ll post a recording, or at least my slides, on this blog.
Today I ditched a long-time partner, Smashwords. I feel really, really bad. I remember clearly the day I found the site and realized I could use this one excellent site for distributing my eBook on multiple platforms: iBooks, Nook, Diesel, Kobo, Sony, etc. I loved the way I could generate free download coupons for my eBook. I raved about Smashwords on this blog. Between Smashwords and Amazon KDP (Kindle Direct Publishing), I had my eBook publishing bases covered.
As of today, I am using Amazon KDP exclusively to distribute my eBook, Counting from Zero.
Why? Because of the terms of the new KDP Select program Amazon launched today. In exchange for forsaking Smashwords (and all others), my eBook will be a part of Amazon’s Kindle Owners’ Lending Library, a brand new part of their Prime service. Users of this service get to “borrow” one eBook per month for free. Authors and publishers get no royalty, but instead will split a slush fund from Amazon based on their book’s share of lending. How much will this be? No one knows – it depends on how much users adopt this new model. There is also the opportunity to offer my eBook in free promotions.
Why did I decide to participate? Well, the financial calculation was trivial. As the pie chart shows, 88% of my sales have been eBooks on KDP, with 7% paperbacks (on Amazon and B&N) and just 5% eBooks through Smashwords. Giving up that 5% of sales to add a new distribution channel is an easy calculation. Also, I just love being able to participate in the disruption of the publishing industry, and it will be a very interesting ride over the next few months to see if this takes off.
Despite the title of this blog (apologies to Dr. Strangelove), I do still worry about Amazon. Their power in the publishing industry is growing exponentially. If the Kindle Fire and lending both take off, it will give Amazon even more leverage. I really, really don’t like the exclusivity requirement for KDP Select. It feels awful to say goodbye to Smashwords, a site that has been extremely useful to me this year.
So, here it is – it will be interesting to see how it goes!
Last night I was interviewed on KMOV-TV Channel 4 in St. Louis about smartphone hacking. I was asked by Jasmine Huda to comment about an article in USA Today “Hackers prey on smartphone use at work during holidays” and about the general issue of smartphone hacking.
The USA Today article is primarily about users whose smartphone connects to both their corporate accounts and their personal accounts. The angle was that the smartphone becomes a new attack vector to penetrate corporate networks via the personal accounts on these devices. While this attack seems plausible in theory and will no doubt happen, it is hardly widespread today. I commented that smartphone hacking is definitely on the rise, with Android devices and their open ecosystem most commonly targeted, while at the other end of the spectrum is the iPhone, with its closed ecosystem and minimal reported hacking. However, there is still potential for iPhone hacking, as demonstrated recently by Charlie Miller, who got an application containing malware accepted into the App Store.
Besides paying attention to what apps you run and what links you follow, you also need to pay attention to the physical security of your smartphone. With so much personal information stored in it, having your smartphone password protected is a must, as is the ability to remotely wipe it if lost. In my techno thriller novel Counting from Zero, the main character Mick O’Malley temporarily loses possession of his smartphone. Being the overly paranoid type, he immediately discards the phone hardware, replaces it, then reinstalls all his information on it.
Today, a bigger concern than smartphone hacking is smartphone privacy, and the personal information that apps are routinely sharing without really informing the user, but this is a topic for another day.