All about Google Chrome & Google Chrome OS

14 Jun 12 Chrome gets MBP Retina Display support in beta release


Apple may only have outed its Retina Display MacBook Pro on Monday, but Google’s Chrome engineers already have a version of the browser ready to handle the 2880 x 1800 high-resolution screen. “We’re committed to polishing Chrome until it shines on [the new Pro],” the Chrome team wrote this week, releasing an early version of the browser with basic Retina support and promising more soon.

Right now, the developer version of Chrome has “basic high-resolution support” but the software engineers concede that they “have further to go over the next few weeks.” You can download the Chrome Canary release here, but be warned it may not be as stable as the regular version.

Poor support from non-Apple apps is one of the biggest issues we discovered in our review of the new MacBook Pro with Retina Display. Apple has brought most of its key OS X apps up to speed with the high-resolution panel – Safari, iMovie, iTunes, iPhoto and more are on the list – and some third-party developers have also been working hard, but many apps and sites still look pixelated and underwhelming.

That will take time to address, though we’re pleased to see Google reacting quickly since so many Mac users aren’t willing to live solely with Safari.

Article source: http://www.slashgear.com/chrome-gets-mbp-retina-display-support-in-beta-release-14233958/


14 Jun 12 Google promises Chrome support for MacBook Pro with Retina Display


By AppleInsider Staff

Published: 09:20 PM EST (06:20 PM PST)

Google revealed on Wednesday that it is “committed to polishing” its Chrome browser to take advantage of Apple’s new MacBook Pro with Retina Display.

Nico Weber, a Google Software Engineer and “Chief Apple Polisher,” posted the promise to the company’s official Chrome blog along with a screenshot of the “early results” of high-resolution support in Chrome.

“We have further to go over the next few weeks, but we’re off to the races to make Chrome as beautiful as it can be,” he said.

In fact, Google has already begun testing the new polish in the Canary developer version of Chrome. AnandTech’s Anand Lal Shimpi said that text in Chrome Canary is “no longer ugly,” compared to the “nasty result” from the current version of Chrome. According to Lal Shimpi, Chrome’s poor results come about because it uses Apple’s text display API but renders to a Retina-unaware “offscreen canvas before scaling the text and displaying it on a web page.”

Though Chrome Canary addresses the rendering issue, Lal Shimpi did note it still “renders text differently” from Apple’s Safari.
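Lal Shimpi’s description concerns Chrome’s internals, but the underlying pitfall (drawing into a surface sized in CSS pixels and then scaling it up for a high-density screen) is easy to reproduce in any page that uses canvas. The sketch below is our own illustration of the general fix, sizing the backing store by window.devicePixelRatio; it is not Chrome code.

// Illustration only: scale a canvas backing store for a high-density display.
// This mirrors the general problem described above, not Chrome's rendering path.
// Assumes a <canvas id="demo"> element exists on the page.
var canvas = document.getElementById("demo");
var ctx = canvas.getContext("2d");
var ratio = window.devicePixelRatio || 1;   // reports 2 on a Retina MacBook Pro

// Keep the CSS size constant, but give the backing store ratio-times the pixels.
var cssWidth = 300, cssHeight = 100;
canvas.style.width = cssWidth + "px";
canvas.style.height = cssHeight + "px";
canvas.width = cssWidth * ratio;
canvas.height = cssHeight * ratio;

// Draw in CSS-pixel coordinates; the scale maps them onto the larger backing store,
// so text is rasterized at native resolution instead of being blown up afterward.
ctx.scale(ratio, ratio);
ctx.font = "16px Helvetica";
ctx.fillText("Crisp text on a Retina display", 10, 50);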

[Image: Chrome | Source: Google]

[Image: Chrome vs. Safari. Left: Chrome; Middle: Chrome Canary; Right: Safari | Source: AnandTech]

Apple released the new MacBook Pro on Monday at the Worldwide Developers Conference. The 15-inch laptop’s new Retina Display features a resolution of 2,880 by 1,800 pixels. Demand for the laptop is outstripping supply, with shipping estimates on Apple’s website currently at three to four weeks.

Retina Display-optimized updates of Apple’s own Mac software have begun steadily rolling out. For instance, Apple released new versions of Final Cut Pro X, Aperture, and iPhoto on Monday.

Article source: http://www.appleinsider.com/articles/12/06/13/google_promises_upcoming_chrome_support_for_macbook_pro_with_retina_display.html


10 May 12 Inspecting WebSocket Traffic with Chrome Developer Tools – SYS-CON


What makes working with WebSockets challenging at times is that the messages are extremely tiny and incredibly fast – making it hard to see them.

With the updated Chrome Developer Tools, you can now see the WebSocket traffic flowing to and from your browser without resorting to tools like Wireshark. Here are the simple steps to make the invisible visible:

1. At the time of writing this post (May 8, 2012), you need to get Chrome Canary or a fresh Chromium build.
2. Navigate to the Echo demo, hosted on the websocket.org site.
3. Turn on the Chrome Developer Tools.
4. Click Network, and to filter the traffic shown by the Dev Tools, click WebSockets (all the way on the bottom).
5. In the Echo demo, click Connect.

6. Click www.websocket.org on the left, representing the WebSocket connection.
7. Make sure you’re on the Headers tab. This tab shows the WebSocket handshake.

Request URL: ws://echo.websocket.org/?encoding=text
Request Method: GET
Status Code: 101 Web Socket Protocol Handshake

Request Headers

Connection: Upgrade
Cookie: __utma=9925811.1340073179.1336513627.1336513627.1336513627.1; __utmb=9925811.4.10.1336513627; __utmc=9925811; __utmz=9925811.1336513627.1.1.utmcsr=websocket.org|utmccn=(referral)|utmcmd=referral|utmcct=/
Host: echo.websocket.org
Origin: http://www.websocket.org
Sec-WebSocket-Extensions: x-webkit-deflate-frame
Sec-WebSocket-Key: DIbT9axdUEPm89HWFqMAZA==
Sec-WebSocket-Version: 13
Upgrade: websocket
(Key3): 00:00:00:00:00:00:00:00

Query String Parameters

encoding: text

Response Headers

Access-Control-Allow-Credentials: true
Access-Control-Allow-Headers: content-type
Access-Control-Allow-Origin: http://www.websocket.org
Connection: Upgrade
Date: Tue, 08 May 2012 22:14:46 GMT
Sec-WebSocket-Accept: rKTyKcnJ105fv4ebnspiYbCB9ns=
Server: Kaazing Gateway
Upgrade: WebSocket
(Challenge Response): 00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00
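As an aside, the Sec-WebSocket-Accept value above is not arbitrary: per the WebSocket protocol, the server appends a fixed GUID to the client’s Sec-WebSocket-Key, takes the SHA-1 hash, and base64-encodes it. A quick Node.js sketch of our own, using the key captured above, should reproduce the response header:

// Derive Sec-WebSocket-Accept from Sec-WebSocket-Key, as the protocol specifies:
// append the fixed GUID, SHA-1 hash the result, then base64-encode it.
var crypto = require("crypto");

var key  = "DIbT9axdUEPm89HWFqMAZA==";                  // from the request above
var GUID = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11";      // fixed by the protocol

var accept = crypto.createHash("sha1").update(key + GUID).digest("base64");
console.log(accept);  // should match the Sec-WebSocket-Accept header shown above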

8. Click the Send button in the Echo demo.

9. THIS STEP IS IMPORTANT: To see the WebSocket frames in the Chrome Developer Tools, under Name/Path, click the echo.websocket.org entry, representing your WebSocket connection. This refreshes the main panel on the right and makes the WebSocket Frames tab show up with the actual WebSocket message content.

Note: Every time you send or receive new messages, you have to refresh the main panel by clicking the echo.websocket.org entry on the left.

The little arrow indicates the direction of the message; after the timestamp, opcode, and mask, you see the length and contents of the WebSocket message.
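If you would rather inspect traffic you generate yourself instead of the Echo demo’s, a few lines of JavaScript run from any page’s console will produce the same handshake and frames. This is a minimal sketch against the echo.websocket.org endpoint used above; the message text is arbitrary.

// Minimal client for the echo service used in the walkthrough above.
// Run it from a page's console, then watch Network > WebSockets in the Dev Tools.
var ws = new WebSocket("ws://echo.websocket.org/?encoding=text");

ws.onopen = function() {
  ws.send("Hello from the Dev Tools");        // appears as an outgoing frame
};

ws.onmessage = function(event) {
  console.log("Echoed back: " + event.data);  // the service returns the same payload
};

ws.onclose = function() {
  console.log("Connection closed");
};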

Article source: http://www.sys-con.com/node/2274190


20 Jan 12 Hands on: building an HTML5 photo booth with Chrome's new webcam API


Experimental support for WebRTC has landed in the Chrome developer channel. The feature is available for testing when users launch the browser with the --enable-media-stream flag. We did some hands-on testing and used some of the new JavaScript APIs to make an HTML5 photo booth.

WebRTC is a proposed set of Web standards for real-time communication. It is intended to eventually enable native standards-based audio and video conferencing in Web applications. It is based on technology that Google obtained in its 2010 acquisition of Global IP Solutions and subsequently released under a permissive open source software license.

Implementations of WebRTC will consist of two parts: a wire protocol for network communication and an assortment of JavaScript APIs that will allow the WebRTC functionality to be used in Web applications. The WebRTC JavaScript API proposal is being drafted by the W3C Web Real-Time Communications Working Group, building on a specification that was originally written by Google’s Ian Hickson. The underlying network protocol standard is being drafted separately through the IETF.

One of the key features defined in the WebRTC specification is the MediaStream object, a generic JavaScript interface for interacting with live audio and video streams. This functionality can be used for a broad range of potential applications beyond audio and video conferencing.

As we reported on Thursday, Mozilla is drafting a specification called MediaStream Processing that defines JavaScript APIs for real-time programmatic manipulation of MediaStream instances. Mozilla’s proposed standard would make it possible for Web developers to use MediaStream in the browser for tasks like audio mixing and motion detection on live video. It’s important to note that MediaStream Processing is a separate standard from WebRTC, though it relies on the JavaScript APIs that are defined in the WebRTC specification.

In order to facilitate audio and video conferencing, the WebRTC JavaScript APIs have to provide a mechanism through which Web applications can access the end user’s webcam and microphone. The specification defines a function called getUserMedia, which does precisely that. If the relevant hardware is present and available for use, getUserMedia will trigger a callback function and pass along a MediaStream instance that mediates live access to a stream from a webcam or microphone.

The getUserMedia feature is especially significant, partly because such functionality was previously only available through proprietary browser plug-ins. Used in conjunction with MediaStream Processing, the ability to take a live MediaStream from a webcam offers some compelling opportunities. As we wrote in our coverage of MediaStream Processing on Thursday, one example is that it will allow Web developers to build standards-based augmented reality experiences that run entirely within the browser.

The getUserMedia function is among the WebRTC features that are now available in the Chrome developer channel when the browser is launched with the --enable-media-stream flag. We started by throwing together a really simple demo so that we could see how it works in action:

<html>
  <head>
    <title>HTML5 Webcam Test</title>
  </head>
  <body>

    <h2>The Thing cannot be described&mdash;there is no
    language for such abysms of shrieking and immemorial
    lunacy, such eldritch contradictions of all matter,
    force, and cosmic order</h2>

    <video id="live" autoplay></video>
    <script type="text/javascript">
      video = document.getElementById("live")

      navigator.webkitGetUserMedia("video",
          function(stream) {
            video.src = window.webkitURL.createObjectURL(stream)
          },
          function(err) {
            console.log("Unable to get video stream!")
          }
      )
    </script>
  </body>
</html>

The getUserMedia function takes three parameters. The first parameter is a string that is used to indicate whether audio or video is desired. In this case, we specify “video” so that we can access the user’s webcam. The second parameter is a callback function that is invoked when the function successfully obtains the webcam stream. The third parameter is a callback function that is invoked upon failure.

The success callback is passed one parameter, a MediaStream instance that provides access to a live video stream from the user’s webcam. In the callback function, we call createObjectURL to create a Blob URL for the stream. When we set the blob URL as the video element’s source, it will display the contents of the webcam MediaStream in real time.

The getUserMedia function is intended to have a security prompt that asks users for permission before making the webcam accessible to a Web application. This prompt will likely be similar to the one that the browser already uses when a Web application calls upon the standard geolocation APIs to request the user’s position.

The getUserMedia security prompt has not been implemented yet in Chrome, so the browser provides immediate webcam access without user intervention. This security weakness will almost certainly be remedied before Google makes the feature available without a launch flag. For now, remember to use caution when browsing with the flag enabled. (For an overview of WebRTC security considerations, you can refer to this IETF slide deck.)

I tested the example above in Chrome Canary on a 2011 MacBook Air connected to a Thunderbolt display. The browser was able to pick up a live video stream from the webcam that is built into the monitor. It works exactly as expected, though it was a bit CPU-intensive. As you can see in the screenshot, I enlisted Cthulhu’s help to test the demo.

Displaying live video from a webcam is a good starting point, but it’s hardly enough for a good demo. I decided to go a step further and expand it into a simple photo booth that can capture snapshots when the user clicks a link. It accomplishes this by painting a single frame of the video to a hidden canvas element and then extracting the image data, which is then plopped into a new image element that is appended to the film roll at the bottom of the page.

<html>
  <head>
    <title>HTML5 Photo Booth</title>
  </head>
  <body>
    <h2>HTML5 Photo Booth</h2>

    <video id="live" autoplay></video>
    <canvas id="snapshot" style="display:none"></canvas>

    <p><a href="#" onclick="snap()">Take a picture!</a></p>
    <div id="filmroll"></div>

    <script type="text/javascript">
      video = document.getElementById("live")

      navigator.webkitGetUserMedia("video",
          function(stream) {
            video.src = window.webkitURL.createObjectURL(stream)
          },
          function(err) {
            console.log("Unable to get video stream!")
          }
      )

      function snap() {
        live = document.getElementById("live")
        snapshot = document.getElementById("snapshot")
        filmroll = document.getElementById("filmroll")

        // Make the canvas the same size as the live video
        snapshot.width = live.clientWidth
        snapshot.height = live.clientHeight

        // Draw a frame of the live video onto the canvas
        c = snapshot.getContext("2d")
        c.drawImage(live, 0, 0, snapshot.width, snapshot.height)

        // Create an image element with the canvas image data
        img = document.createElement("img")
        img.src = snapshot.toDataURL("image/png")
        img.style.padding = "5px"
        img.width = snapshot.width / 2
        img.height = snapshot.height / 2

        // Add the new image to the film roll
        filmroll.appendChild(img)
      }
    </script>
  </body>
</html>

The WebRTC standard is still evolving, so the API will likely undergo changes before it is finalized. The Chrome developer channel offers a great test environment for Web developers who want to start experimenting with MediaStream functionality. Opera also has a custom test build available with getUserMedia enabled.
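Because of that churn, it is worth insulating experiments from prefix differences. The following is our own defensive sketch, not code from any spec: it assumes other implementations keep the ("video", success, error) argument order that Chrome's dev channel currently uses, and it simply reports when getUserMedia is not there at all.

// Defensive shim: pick up getUserMedia under whatever name the browser exposes.
// Argument order and prefixes may well change before the spec is finalized.
var getUserMedia = navigator.getUserMedia ||        // unprefixed, if/when it ships
                   navigator.webkitGetUserMedia;    // Chrome dev channel today

if (getUserMedia) {
  getUserMedia.call(navigator, "video",
      function(stream) {
        var video = document.getElementById("live");
        var URL = window.URL || window.webkitURL;
        // Prefer an object URL when the browser provides one; otherwise try the stream directly.
        video.src = (URL && URL.createObjectURL) ? URL.createObjectURL(stream) : stream;
      },
      function(err) {
        console.log("Unable to get video stream!");
      });
} else {
  console.log("getUserMedia is not available in this browser");
}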

Mozilla is working to add support for WebRTC to Firefox. As we demonstrated yesterday, they have basic MediaStream support implemented. They do not, however, have support for getUserMedia yet. It’s worth noting that Mozilla is also developing an independent Camera API standard specifically for capturing from webcams and built-in cameras on mobile devices.

Ericsson Labs has also been doing a lot of work with WebRTC. They have a fairly sophisticated implementation that is built on top of WebKitGtk+, the WebKit port that is used by the GNOME desktop environment and many popular Gtk+ applications on Linux. Ericsson’s WebRTC-enabled version of WebKitGtk+ can be used with GNOME’s Epiphany Web browser to test WebRTC capabilities on Linux. You can see it running a full-blown, browser-based video conferencing demo on Ubuntu in this video.

WebRTC is clearly on track to deliver interactive browser-based audio and video conferencing with Web standards. Popular tools like WebEx, Google+ Hangouts, and Facebook video chat could all eventually be rebuilt to run natively in the browser without requiring plug-ins. Even more compelling is the prospect of having WebRTC and MediaStream Processing available in mobile Web browsers. Imagine being able to have the kind of functionality that you get in Instagram, Layar, and FaceTime available in mobile Web applications.

Article source: http://arstechnica.com/business/news/2012/01/hands-on-building-an-html5-photo-booth-with-chromes-new-webcam-api.ars?utm_source=rss&utm_medium=rss&utm_campaign=rss
