Wednesday, June 6, 2012

Google Patent Issued: New Method for Access Using Images



United States Patent 8,196,198
Eger - June 5, 2012

Access Using Images 



Abstract

A computer-implemented method may include presenting to a user multiple images, multiple identifiers and a challenge to select one of the identifiers for each of the images, receiving the selected identifiers from the user, and providing access to a computing service based on a comparison of the selected identifiers to an answer to the challenge.

Inventors: Eger; David Thomas (Burlingame, CA)
Assignee: Google Inc. (Mountain View, CA)
Appl. No.: 12/345,265
Filed: December 29, 2008

Current U.S. Class: 726/21; 726/2; 726/7
Current International Class: G06F 7/04 (20060101)
Field of Search: 726/2,4,17,21,27; 713/155-159,168-186; 380/247-250; 705/44




References Cited




U.S. Patent Documents

6,128,397 - October 2000 - Baluja et al.
6,195,698 - February 2001 - Lillibridge et al.
6,295,387 - September 2001 - Burch
6,956,966 - October 2005 - Steinberg
7,149,899 - December 2006 - Pinkas
7,266,693 - September 2007 - Potter
7,653,944 - January 2010 - Chellapilla
7,656,402 - February 2010 - Abraham et al.
7,841,940 - November 2010 - Bronstein
7,891,005 - February 2011 - Baluja et al.
7,908,223 - March 2011 - Klein et al.
7,921,454 - April 2011 - Cerruti
7,929,805 - April 2011 - Wang et al.
8,019,127 - September 2011 - Misra
8,073,912 - December 2011 - Kaplan
8,090,219 - January 2012 - Gossweiler et al.
8,103,960 - January 2012 - Hua et al.
8,136,167 - March 2012 - Gossweiler et al.
2002/0141639 - October 2002 - Steinberg
2004/0073813 - April 2004 - Pinkas et al.
2004/0199597 - October 2004 - Libbey et al.
2005/0014118 - January 2005 - von Ahn Arellano
2005/0065802 - March 2005 - Rui et al.
2005/0229251 - October 2005 - Chellapilla et al.
2006/0167874 - July 2006 - von Ahn Arellano et al.
2007/0130618 - June 2007 - Chen
2007/0201745 - August 2007 - Wang et al.
2008/0050018 - February 2008 - Koziol
2008/0216163 - September 2008 - Pratte et al.
2008/0244700 - October 2008 - Osborn et al.
2009/0094687 - April 2009 - Jastrebski
2009/0113294 - April 2009 - Sanghavi et al.
2009/0138468 - May 2009 - Kurihara
2009/0138723 - May 2009 - Nyang
2009/0150983 - June 2009 - Saxena et al.
2009/0235178 - September 2009 - Cipriani et al.
2009/0249476 - October 2009 - Seacat et al.
2009/0249477 - October 2009 - Punera
2009/0319274 - December 2009 - Gross
2009/0325696 - December 2009 - Gross
2009/0328150 - December 2009 - Gross
2010/0077210 - March 2010 - Broder et al.
2010/0100725 - April 2010 - Ozzie et al.


Foreign Patent Documents

2008/091675 - July 2008 - WO



Other References


Chellapilla, K., et al. "Computers Beat Humans at Single Character Recognition in Reading Based Human Interaction Proofs (HIPs)," in Proceedings of the 2nd Conference on Email and Anti-Spam (CEAS), 2005.
Rowley, H., et al. "Rotation Invariant Neural Network-Based Face Detection," CMU-CS-97-201, Dec. 1997.
Fu, H., et al. "Upright Orientation of Man-Made Objects," SIGGRAPH 2008, 35th International Conference and Exhibition on Computer Graphics and Interactive Techniques, Aug. 2008.
Lopresti, D., "Leveraging the CAPTCHA Problem," 2nd Int'l Workshop on Human Interactive Proofs, Bethlehem, PA, May 2005.
Rowley, H., et al. "Neural Network-Based Face Detection," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 20, No. 1, Jan. 1998.
Mori, G., et al. "Recognizing Objects in Adversarial Clutter: Breaking a Visual CAPTCHA," Proceedings of Computer Vision and Pattern Recognition, 2003.
Rui, Y., et al. "Characters or Faces: A User Study on Ease of Use for HIPs," Lecture Notes in Computer Science, vol. 3517, pp. 53-65, Springer Berlin, 2005.
Vailaya, A., et al. "Automatic Image Orientation Detection," IEEE Transactions on Image Processing, vol. 11, No. 7, pp. 746-755, Jul. 2002.
Baluja, S., et al. "Large Scale Performance Measurement of Content-Based Automated Image-Orientation Detection," IEEE Conference on Image Processing, vol. 2, pp. 514-517, Sep. 11-14, 2005.
Viola, P., et al. "Rapid Object Detection Using a Boosted Cascade of Simple Features," Proceedings of Computer Vision and Pattern Recognition, pp. 511-518, 2001.
Von Ahn, L., et al. "Telling Humans and Computers Apart (Automatically) or How Lazy Cryptographers do AI," Communications of the ACM, vol. 47, No. 2, Feb. 2004.
Von Ahn, L., et al. "CAPTCHA: Using Hard AI Problems for Security," Advances in Cryptology--EUROCRYPT 2003, Springer Berlin, 2003.
Von Ahn, L., et al. "Labeling Images With a Computer Game," Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, ACM, pp. 319-326, Vienna, Austria, 2004.
Von Ahn, L., et al. "Improving Accessibility of the Web With a Computer Game," Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, ACM, pp. 79-82, Montreal, Quebec, Canada, 2006.
Von Ahn, L. "Games With a Purpose," IEEE Computer, pp. 96-98, Jun. 2006.
Wu, V., et al. "Textfinder: An Automatic System to Detect and Recognize Text in Images," Computer Science Department, Univ. of Massachusetts, Nov. 18, 1997.
Wu, V., et al. "Finding Text in Images," Proceedings of the 2nd ACM Int'l Conf. on Digital Libraries, 1997.
Zhang, L., et al. "Boosting Image Orientation Detection With Indoor vs. Outdoor Classification," IEEE Workshop on Application of Computer Vision, pp. 95-99, Dec. 2002.
Elson, J., et al. "Asirra: A CAPTCHA that Exploits Interest-Aligned Manual Image Categorization," CCS '07, 9 pages, Oct. 2007.
Praun, E., et al. "Lapped Textures," ACM SIGGRAPH 2000, 6 pages, 2000.
Adamchak, et al., "A Guide to Monitoring and Evaluating Adolescent Reproductive Health Programs," Pathfinder International, Focus on Young Adults, 2000, pp. 265-274.
Siegle, D., "Sample Size Calculator," Neag School of Education--University of Connecticut, retrieved on Sep. 18, 2008, from http://www.gifted.uconn.edu/siegle/research/Samples/samplecalculator.htm, 2 pages.
"Sampling Information," Minnesota Center for Survey Research--University of Minnesota, 2007, 4 pages.
U.S. Appl. No. 12/256,827, filed Oct. 23, 2008.
U.S. Appl. No. 12/254,312, filed Oct. 20, 2008.
U.S. Appl. No. 12/486,714, filed Jun. 17, 2009.
U.S. Appl. No. 12/345,265, filed Dec. 29, 2008.
U.S. Appl. No. 12/254,325, filed Oct. 20, 2008.
Chew, et al., "Collaborative Filtering CAPTCHAs," HIP 2005, LNCS 3517, May 20, 2005, pp. 66-81.
Extended EP Search Report for EP Application No. 08713263.5, mailed Feb. 4, 2011, 9 pages.
Lopresti, "Leveraging the CAPTCHA Problem," HIP 2005, LNCS 3517, May 20, 2005, pp. 97-110.
Shirali-Shahrea, "Collage CAPTCHA," IEEE 2007, 4 pages.
Shirali-Shahrea, "Online Collage CAPTCHA," WIAMIS '07: Eighth International Workshop on Image Analysis for Multimedia Interactive Services, 2007, 4 pages.
Xu, et al., "Mandatory Human Participation: A New Authentication Scheme for Building Secure Systems," Proceedings of the 12th International Conference on Computer Communications and Networks, Oct. 20, 2003, pp. 547-552.
"Figure," The American Heritage Dictionary of the English Language, 2007, retrieved on Aug. 13, 2011 from http://www.credoreference.com/entry/hmdictenglang/figure, 4 pages.
First Office Action for Chinese Patent Application No. 200880002917.8 (with English Translation), mailed May 12, 2011, 7 pages.
Non-Final Office Action for U.S. Appl. No. 12/606,465, mailed Aug. 19, 2011, 25 pages.
Non-Final Office Action for U.S. Appl. No. 12/254,325, mailed Sep. 1, 2011, 17 pages.
Restriction Requirement for U.S. Appl. No. 12/254,312, mailed Sep. 14, 2011, 5 pages.
Restriction Requirement Response for U.S. Appl. No. 12/254,312, filed Oct. 14, 2011, 1 page.
Notice of Allowance for U.S. Appl. No. 12/254,312, mailed Nov. 7, 2011, 19 pages.
Office Action for European Application No. 08713263.5, mailed Dec. 23, 2011, 4 pages.
Final Office Action for U.S. Appl. No. 12/254,325, mailed Feb. 10, 2012, 15 pages.
Non-Final Office Action for U.S. Appl. No. 12/486,714, mailed Mar. 2, 2012, 16 pages.

Primary Examiner: Zand; Kambiz 
Assistant Examiner: Mohammadi; Fahimeh 
Attorney, Agent or Firm: Brake Hughes Bellermann LLP


Claims




What is claimed is:

1. A computer-implemented method, comprising: presenting to a user multiple images, multiple identifiers and a challenge to select one of the identifiers for each of the images, wherein: the images represent objects that are identifiable by a specific identifier, and more identifiers than images are presented in a single access attempt with the presented identifiers including the specific identifiers that identify the presented images and non-specific identifiers that do not identify the presented images; receiving the selected identifiers from the user from among the presented identifiers; and providing access to a computing service based on a comparison of the selected identifiers to an answer to the challenge when the selected identifiers match the specific identifiers to the presented images.

2. The computer-implemented method as in claim 1 wherein the images are three dimensional models.

3. The computer-implemented method as in claim 1 wherein the images are randomly textured, three dimensional models.

4. The computer-implemented method as in claim 1 wherein the images are three dimensional models with each of the three dimensional models set against a separate, randomly generated background.

5. The computer-implemented method as in claim 1 wherein the images are randomly rotated, three dimensional models.

6. The computer-implemented method as in claim 1 wherein the images are randomly colored, three dimensional models.

7. The computer-implemented method as in claim 1 wherein the images are randomly textured, three dimensional models with each of the three dimensional models set against a separate, randomly generated background.

8. The computer-implemented method as in claim 1 wherein at least two times more of the identifiers are presented than the images.

9. The computer-implemented method as in claim 1 wherein at least three times more of the identifiers are presented than the images.

10. The computer-implemented method as in claim 1 wherein providing access to the computing service comprises unlocking a mobile computing device.

11. The computer-implemented method as in claim 1 wherein providing access to the computing service comprises serving to the user a web page.

12. A computer-readable storage device having recorded and stored thereon instructions that, when executed, perform the actions of: presenting to a user multiple images, multiple identifiers and a challenge to select one of the identifiers for each of the images, wherein: the images represent objects that are identifiable by a specific identifier, and more identifiers than images are presented in a single access attempt with the presented identifiers including the specific identifiers that identify the presented images and non-specific identifiers that do not identify the presented images; receiving the selected identifiers from the user from among the presented identifiers; and providing access to a computing service based on a comparison of the selected identifiers to an answer to the challenge when the selected identifiers match the specific identifiers to the presented images.

13. The computer-readable storage device of claim 12 wherein the images are three dimensional models.

14. The computer-readable storage device of claim 12 wherein the images are randomly textured, three dimensional models.

15. The computer-readable storage device of claim 12 wherein the images are three dimensional models with each of the three dimensional models set against a separate, randomly generated background.

16. The computer-readable storage device of claim 12 wherein the images are randomly rotated, three dimensional models.

17. The computer-readable storage device of claim 12 wherein the images are randomly colored, three dimensional models.

18. The computer-readable storage device of claim 12 wherein the images are randomly textured, three dimensional models with each of the three dimensional models set against a separate, randomly generated background.

19. The computer-readable storage device of claim 12 wherein providing access to the computing service comprises unlocking a mobile computing device.

20. The computer-readable storage device of claim 12 wherein providing access to the computing service comprises serving to the user a web page.

21. A computer-implemented access control system, comprising: one or more servers that are arranged and configured to: present to a user multiple images, multiple identifiers and a challenge to select one of the identifiers for each of the images, wherein: the images represent objects that are identifiable by a specific identifier, and more identifiers than images are presented in a single access attempt with the presented identifiers including the specific identifiers that identify the presented images and non-specific identifiers that do not identify the presented images; receive the selected identifiers from the user from among the presented identifiers; and provide access to a computing service based on a comparison of the selected identifiers to an answer to the challenge when the selected identifiers match the specific identifiers to the presented images.

22. The system of claim 21 wherein the images are randomly textured, three dimensional models with each of the three dimensional models set against a separate, randomly generated background.

23. The system of claim 21 wherein the servers are arranged and configured to provide access to the computing service by serving to the user a web page.

24. A computer-implemented method, comprising: presenting to a user multiple images, multiple identifiers and a challenge to select one of the identifiers for each of the images, wherein: the images represent objects that are identifiable by a specific identifier, and more identifiers than images are presented in a single access attempt with the presented identifiers including the specific identifiers that identify the presented images and non-specific identifiers that do not identify the presented images; receiving the selected identifiers from the user from among the presented identifiers; and providing access to an electronic device based on a comparison of the selected identifiers to an answer to the challenge when the selected identifiers match the specific identifiers to the presented images.

25. The computer-implemented method as in claim 24 wherein the images are randomly textured, three dimensional models with each of the three dimensional models set against a separate, randomly generated background.

26. The computer-implemented method as in claim 24 wherein providing access to the electronic device comprises unlocking a music device.

27. The computer-implemented method as in claim 24 wherein providing access to the electronic device comprises unlocking a game device.


Description




TECHNICAL FIELD

This document relates to systems and techniques for providing access to computing resources based on user responses to images.

BACKGROUND

Computer security is becoming an ever more important feature of computing systems. As users take their computers with them in the form of laptops, palmtops, and smart phones, it becomes desirable to lock such mobile computers from access by third parties. Also, as more computing resources on servers are made available over the Internet, and thus theoretically available to anyone, it becomes more important to ensure that only legitimate users, and not hackers or other fraudsters, are using the resources.

Computer security is commonly provided by requiring a user to submit credentials in the form of a password or pass code. For example, a mobile device may lock after a set number of minutes of inactivity, and may require a user to type a password that is known only to them in order to gain access to the services on the device (or may provide access to limited services without a password). In a similar manner, a web site may require a user to enter a password before being granted access. Also, certain web sites may require potential users to enter a term that is displayed to the users in an obscured manner so that automated machines cannot access the web sites for proper or improper purposes (e.g., to overload the web site servers). Such techniques are commonly referred to as CAPTCHAs (Completely Automated Public Turing test to tell Computers and Humans Apart).

SUMMARY

This document describes systems and techniques that may be used to limit access to computing services, which, throughout this document, includes computing devices, electronic devices (e.g., music devices, game devices, etc.) and computing services (e.g., online computing services, web pages, etc.). In general, multiple images are shown to a user along with multiple identifiers, and a challenge may require the user to select the appropriate identifier for each of the images to gain access. For example, the images may be objects and the identifiers may be names of objects. More identifiers than images may be shown to the user such that the user has more identifiers to select from to associate with each of the images. If the user selects the appropriate identifier for each of the images, then access is granted. Such an example could be used in a CAPTCHA system to block access by automated computing systems, but permit access by human users.

In one exemplary implementation, the images may be three dimensional models. Also, the three dimensional (3D) models may be generated on the fly as requests for access are received. Many different variations of the same images may be presented to the user. For example, if the images presented are 3D models, the 3D models may be colored, textured, rotated and/or set against various backgrounds to achieve many different variations of the same 3D models. In this manner, a small corpus of labeled 3D models may be used. Although a small corpus of labeled 3D models may be used, the number of potential variations is great and does not have to rely on an enormous corpus of labeled data to provide the necessary variation against attackers, who might attempt to label a corpus of stock photos or images.
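To illustrate the idea in rough code terms, the sketch below draws a random variation of a labeled 3D model from a small corpus. Every name here, the corpus entries, the texture and background lists, and the parameter ranges, is a hypothetical stand-in; the patent does not prescribe any particular data structure or rendering pipeline.

```python
import random

# Hypothetical small labeled corpus; the file names and option lists below are
# illustrative assumptions, not values taken from the patent.
MODEL_CORPUS = {"boat": "boat.obj", "giraffe": "giraffe.obj", "teapot": "teapot.obj"}
TEXTURES = ["fur", "bumps", "wood", "marble"]
BACKGROUNDS = ["clouds", "noise", "gradient", "checker"]

def random_variation(label: str) -> dict:
    """Return a rendering spec for one challenge image of the labeled model."""
    return {
        "label": label,                        # the correct identifier stays the same
        "model_file": MODEL_CORPUS[label],
        "color": tuple(random.random() for _ in range(3)),  # random RGB tint
        "texture": random.choice(TEXTURES),
        "rotation_degrees": random.uniform(0, 360),
        "background": random.choice(BACKGROUNDS),
    }

if __name__ == "__main__":
    for label in random.sample(list(MODEL_CORPUS), 2):
        print(random_variation(label))
```

A real implementation would hand such a spec to a renderer; the point is only that a single labeled model can yield a very large number of visually distinct challenge images.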

Multiple images also may be displayed to increase the level of security (because it is much harder to label three or four or six images by guessing than it is to label one). Also, the images may be pre-screened so that only images that are very difficult for a computing system to automatically label with an identifier are selected.

In certain implementations, such systems and techniques may provide one or more advantages. For example, using multiple images such as 3D models that can be colored, textured, rotated and/or set against various backgrounds along with more identifiers to select from than images can provide for a number of different inputs so as to provide relatively high security. The systems and techniques may be presented to a user on devices that use a touch screen such that the user can make identifier selections without using a keyboard or mouse. It also permits the user to enter a pass code with the use of a keyboard. Such an approach may be particularly useful for touch screen devices such as mobile smart phones, where a keyboard is hidden during normal use of the device. Also, image-based access may provide a more pleasing interface for users of computing devices, so that the users are more likely to use or remember a device or service.

According to one general aspect, a computer-implemented method may include presenting to a user multiple images, multiple identifiers and a challenge to select one of the identifiers for each of the images, receiving the selected identifiers from the user and providing access to a computing service based on a comparison of the selected identifiers to an answer to the challenge.

Implementations may include one or more of the following features. For example, the images may be three dimensional models. The images may be randomly textured, three dimensional models. The images may be three dimensional models with each of the three dimensional models set against a separate, randomly generated background. The images may be randomly rotated, three dimensional models. The images may be randomly colored, three dimensional models. The images may be randomly textured, three dimensional models with each of the three dimensional models set against a separate, randomly generated background.

In one exemplary implementation, at least two times more of the identifiers are presented than the images. In another exemplary implementation, at least three times more of the identifiers are presented than the images.

Providing access to the computing service may include unlocking a mobile computing device and/or may include serving to the user a web page.

In another general aspect, a recordable storage medium may include recorded and stored instructions that, when executed, perform the actions of presenting to a user multiple images, multiple identifiers and a challenge to select one of the identifiers for each of the images, receiving the selected identifiers from the user and providing access to a computing service based on a comparison of the selected identifiers to an answer to the challenge.

Implementations may include one or more of the following features. For example, the images may be three dimensional models. The images may be randomly textured, three dimensional models. The images may be three dimensional models with each of the three dimensional models set against a separate, randomly generated background. The images may be randomly rotated, three dimensional models. The images may be randomly colored, three dimensional models. The images may be randomly textured, three dimensional models with each of the three dimensional models set against a separate, randomly generated background. Providing access to the computing service may include unlocking a mobile computing device and/or serving to the user a web page.

In another general aspect, a computer-implemented access control system may include one or more servers that are arranged and configured to present to a user multiple images, multiple identifiers and a challenge to select one of the identifiers for each of the images, receive the selected identifiers from the user and provide access to a computing service based on a comparison of the selected identifiers to an answer to the challenge.

Implementations may include one or more of the following features. For example, the images may be randomly textured, three dimensional models with each of the three dimensional models set against a separate, randomly generated background. The servers may be arranged and configured to provide access to the computing service including serving to the user a web page.

In another general aspect, a computer-implemented method may include presenting to a user multiple images, multiple identifiers and a challenge to select one of the identifiers for each of the images, receiving the selected identifiers from the user and providing access to an electronic device based on a comparison of the selected identifiers to an answer to the challenge.

Implementations may include one or more of the following features. For example, the images may be randomly textured, three dimensional models with each of the three dimensional models set against a separate, randomly generated background. Providing access to the electronic device may include unlocking a music device and/or unlocking a game device.

The details of one or more implementations are set forth in the accompanying drawings and the description below. Other features will be apparent from the description and drawings, and from the claims.

BRIEF DESCRIPTION OF THE DRAWINGS

FIGS. 1A-1D show example screen shots of a challenge presented to a user to gain access.

FIG. 2 is an exemplary block diagram of an illustrative mobile system for limiting access using images and identifier inputs from users.

FIG. 3 is a flowchart of an example process for limiting access to a device or service.

FIG. 4 is a swim lane diagram of an example process for granting user access to an online service.

FIG. 5 shows an example of a computer device and a mobile computer device that can be used to implement the techniques described here.

Like reference symbols in the various drawings indicate like elements.

DETAILED DESCRIPTION

This document describes systems and techniques for mediating access to computing services, which throughout this document includes mediating access to computing devices, electronic devices (e.g., music devices, game devices, etc.) and mediating access to computing services (e.g., online computing services including websites and web pages). Such techniques may include displaying one or more images and multiple identifiers. The user may then be challenged and/or prompted to select one of the presented identifiers for each of the images. If the user properly selects the correct identifier for each of the images, the user may be provided access to a device or service.

FIGS. 1A-1D show an example screen shot 100, which may be presented to a user. The screen shot 100 may be presented in response to the user seeking access to a device or to a service. For example, the user may navigate to a website using a browser, where the screen shot 100 is presented to the user before the user can enter the website. The screen shot 100 also may be presented to a user seeking to unlock a device such as after a period of inactivity or after the device goes from a sleep mode to an active mode.

The screen shot 100 includes a challenge to the user that the user is required to answer correctly in order to gain access. In the figures, screen shot 100 includes multiple images 102a-102c, multiple identifiers 104 and a submit button 106. The images 102a-102c may be randomly generated and presented to the user in the screen shot 100. To gain access, the user is challenged to select the appropriate identifier from the list of identifiers 104 for each of the images 102a-102c and to submit the selections using the submit button 106. For example, instructions may be provided to the user telling the user that access may be granted by correctly labelling each of the images 102a-102c with one of the provided identifiers 104. If the user selects the correct identifier for each of the images 102a-102c, then access is granted. If the user does not select the correct identifier for each of the images 102a-102c, then access is denied.

In FIG. 1A, the screen shot 100 is provided to the user including a challenge to label each of the images 102a-102c with the correct identifier from the provided identifiers 104. Each of the images 102a-102c is displayed as being "unanswered," meaning that an identifier has not been selected for any of the images 102a-102c. The user may select an identifier for an image in different ways. For instance, the user may select one of the images such as image 102a and then select an identifier from the provided list of identifiers 104. The selected identifier may be displayed with the image in place of "unanswered." The user may change a selected identifier for an image simply by selecting another identifier while the image is highlighted. As the user selects an image, the instructions provided to the user may change. In FIG. 1A, if the user selects image 102a, the instructions in the screen shot 100 state "Please identify image 1." As the user selects the other images 102b and 102c, the instructions may change accordingly.

FIG. 1B illustrates the screen shot 100 where the user has selected image 102a and selected the identifier "Boat" from the list of identifiers 104 for the image 102a. The identifier is now displayed below the image 102a. The images 102a-102c and the identifiers 104 may be selected using a touch screen, a mouse, a keyboard and/or other types of methods to select objects displayed on a screen. Although the identifiers 104 are illustrated as a list next to the images 102a-102c, this illustrates merely one exemplary implementation. Other implementations may be used to present the identifiers 104 to the user. For instance, the identifiers 104 may be presented to the user in a drop down menu. Also, the identifiers may be presented below each of the images 102a-102c in a drop down menu or other presentation mechanism including, for example, in a pop-up window.

In FIG. 1B, the remaining two images 102b and 102c are "unanswered." When the user highlights or otherwise selects image 102b, the instructions in the screen shot 100 may change to state "Please identify image 2." FIG. 1C illustrates the screen shot 100 where the user has selected the image 102b and selected the identifier "Animal" from the list of identifiers 104 for the image 102b. The identifier is now displayed below the image 102b. Although the selected identifier is displayed below the image in this example, the selected identifier for an image may be indicated in other exemplary manners. The remaining image 102c is "unanswered." When the user highlights or otherwise selects image 102c, the instructions in the screen shot may change to state "Please identify image 3." The instructions as presented to the user in this example are merely exemplary, and other forms or manners of presenting instructions to the user may be implemented.

FIG. 1D illustrates the screen shot 100 where the user has selected the image 102c and selected the identifier "Teapot" from the list of identifiers 104 for the image 102c. The selected identifier is now displayed below the image 102c. When the user has selected an identifier for each of the images 102a-102c, the instructions may tell the user to "Please submit" in order to have the selected identifiers submitted for a comparison against the correct identifiers.

In one exemplary implementation, the submit button 106 may be grayed-out or not selectable until the user has selected an identifier for each of the images 102a-102c. In other exemplary implementations, the submit button 106 may be selectable at any time. The selection of the submit button 106 by the user may cause the selected identifiers to be submitted for a comparison against the correct identifiers. For example, if the screen shot 100 is presented to a user attempting to unlock a device, then selection of the submit button 106 may cause the selected identifiers to be compared against the correct identifiers for this particular challenge, where the comparison of the selected identifiers against the correct identifiers may be performed by a module in the device. If the comparison is a match, then the device is unlocked. If the comparison is not a match, the device is not unlocked. The user may be given one or more additional chances to unlock the device either with the same challenge or with a different randomly generated challenge. After a configurable number of unsuccessful attempts, the device may be locked on a more permanent basis. Such a system may be used to enable humans to access the device, but to prevent automated computer systems from accessing the device, especially devices that are capable of communicating with wired and/or wireless networks. Such a system also may be used to prevent accidental activation or use of the device when such use of the device is not intended by the user, such as when the device is in the user's pocket or other device holder.
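A minimal sketch of that on-device check might look like the following, assuming the selections arrive as a mapping from image slot to identifier and that three failed attempts trigger the more permanent lock. The class name, answer format, and attempt limit are all illustrative; the patent only says the number of attempts is configurable.

```python
MAX_ATTEMPTS = 3  # assumed value; the patent only calls the limit "configurable"

class DeviceLock:
    """Minimal sketch of the on-device comparison module described above."""

    def __init__(self, answer: dict[str, str]):
        # answer maps an image slot to its correct identifier, e.g. {"image1": "Boat"}
        self.answer = answer
        self.failed_attempts = 0
        self.locked_out = False

    def submit(self, selections: dict[str, str]) -> bool:
        """Return True (unlock) only when every selected identifier is correct."""
        if self.locked_out:
            return False
        if selections == self.answer:
            self.failed_attempts = 0
            return True
        self.failed_attempts += 1
        if self.failed_attempts >= MAX_ATTEMPTS:
            self.locked_out = True  # lock on a more permanent basis
        return False

lock = DeviceLock({"image1": "Boat", "image2": "Animal", "image3": "Teapot"})
print(lock.submit({"image1": "Boat", "image2": "Animal", "image3": "Teapot"}))  # True
```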

Similarly, if the screen shot 100 is presented to a user attempting to access an online service such as, for example, attempting to access a website, then selection of the submit button 106 may cause the selected identifiers to be communicated to an access server. The comparison of the selected identifiers to the correct identifiers may be performed by the access server. If the comparison is a match, then access is granted to the website. If the comparison is not a match, then access is denied. Such a system may be used to enable humans to access the website, but to prevent automated computer systems from accessing the website because the automated systems may not be able to recognize the images and to select the correct identifier for each of the images.

In these example figures, the user is presented with more identifiers to select from than there are images presented. In one exemplary implementation, the user may be presented with at least twice as many identifiers to select from as there are images presented. In another exemplary implementation, the user may be presented with at least three times as many identifiers to select from as there are images presented. The more identifiers that are presented in relation to the number of images, the lower the probability that a human or an automated computing system would randomly guess the correct identifier for each of the images.
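To make that last point concrete, the short calculation below estimates the chance of passing purely by guessing, assuming the guesser picks a distinct identifier for each image uniformly at random; the function and the example sizes (three images with six or nine identifiers) are illustrative, not taken from the patent.

```python
from math import perm

def guess_probability(num_images: int, num_identifiers: int) -> float:
    """Chance of labeling every image correctly by guessing distinct identifiers."""
    return 1 / perm(num_identifiers, num_images)

# Three images with twice as many (6) or three times as many (9) identifiers.
print(guess_probability(3, 6))  # 1/120, roughly 0.8%
print(guess_probability(3, 9))  # 1/504, roughly 0.2%
```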

In one exemplary implementation, the images presented to the user may be computer-generated three dimensional (3D) models. For example, the images 102a-102c may be computer-generated 3D models of different objects, namely, a boat, an animal and a teapot. The use of 3D models may make it more difficult for automated computing systems to determine the identity of the image. Additionally, the same 3D models may be presented to the user with many different variations to the 3D model. For instance, the 3D model may be stylistically rendered and presented to include different colors, textures, and/or shading styles. The 3D models also may be randomly rotated such that they can be presented in various different orientations. The 3D models also may be presented against various different backgrounds. For example, each of the images 102a-102c may be presented against a different background.

The different variations may be applied to a 3D model individually or collectively in different combinations. For instance, the image 102b of the giraffe may be rotated and the giraffe object may be textured in something other than giraffe spots such as, for example, fur or bumps or any of many other types of textures. When these techniques are used to unlock a device, the device may randomly generate the 3D models with the different potential variations for presentation to the user. When these techniques are used to access a computing service, a server or other computing device that is remote from the user may randomly generate the 3D models with the different potential variations for presentation to the user.

In the above example, having the user select the correct identifier for each of the images to unlock the device may prevent the user from accidentally hitting buttons (e.g., when the device is in the user's pocket). Also, this makes it more difficult for remote hackers, especially automated machines, to access the device using guesses and other brute force-type techniques.

In one exemplary implementation, the images 102a-102c may be presented as a single composite image with the images 102a-102c being objects within the single composite image instead of the images 102a-102c being presented as multiple independent images. For example, the images 102a-102c may be presented left-to-right as objects within the single composite image. In another example, the images 102a-102c may be presented top-to-bottom as objects within the single composite image. The user may be challenged to select the proper identifier from the provided identifiers for each of the objects within the single composite image in the different manners described above.

The above techniques also may be used in combination with other security techniques such as, for example, passwords and/or biometrics to provide additional security to gain access.

FIG. 2 is an exemplary block diagram of an illustrative mobile system 200 for limiting device access using images and identifier inputs from users. The system includes, in the main, a mobile computing device 202, such as, for example, a smart phone or personal digital assistant (PDA), to which access can be granted, or that may mediate access to assets from remote servers or other computers, such as access to Internet web sites or to features and services on Internet web sites.

The device 202 can interact graphically using a graphical user interface (GUI) on a display 204 that may show representations of various images to a user and that may receive input from the user. In one example, the display 204 is a touch screen display, so that a user may directly press upon images to manipulate them on the display 204 and to select the correct identifier for each of the images from the provided identifiers. Input to the device may also be provided using a trackball 206 and a keyboard 207 on the device 202. The keyboard 207 may be a hard keyboard with physical keys, a soft keyboard that is essentially a touch screen keyboard, or a combination of both.

A display manager 208 is provided to supervise and coordinate information to be shown on the display 204. The display manager 208, for example, may be provided with data relating to information to be displayed and may coordinate data received from various different applications or modules. As one example, display manager 208 may receive data for overlapping windows on a windowed display and may determine which window is to be on top and where the lower window or windows are to be cut.

Device inputs such as presses on the touch screen 204 may be processed by an input manager 212. For example, the input manager 212 may receive information regarding input provided by a user on touch screen 204, and may forward such information to various applications or modules. For example, the input manager 212 may cooperate with the display manager 208 so as to understand what onscreen elements a user is selecting when they press on the touch screen 204.

The device 202 may include a processor 216 that executes instructions stored in memory 217, including instructions provided by a variety of applications 214 stored on the device 202. The processor 216 may comprise multiple processors responsible for coordinating interactions among other device components and communications over an I/O interface 219. The processor 216 also may be responsible for managing internal alerts generated by the device 202. For example, the processor 216 may be alerted by the input manager 212 (which may operate on the processor) when a user touches the display 204 so as to take the device 202 out of a sleep mode state. Such an input may cause the processor 216 to present images and identifiers to the user for the user to select and submit the correct identifier for each of the images in order to provide access to the device 202 or various services, as explained above and below. In one exemplary implementation, the input may cause the processor 216 to generate the images as 3D models for presentation to the user along with multiple identifiers. Also, the processor 216 may generate the variations such as, for example, color, shading, textures, different backgrounds and/or rotations, and randomly apply the variations to the 3D models or non-3D images for presentation to the user on the display 204.

The processor 216 may perform such functions in cooperation with a device access manager 210. The device access manager 210 may execute code to gather images from the access images memory 222, to gather the identifiers, and to present the images and identifiers to a user of the device 202. The device access manager 210 may display the images in a manner that permits user manipulation of the images and the identifiers, may test user selected identifiers, and may provide an indication that access should be granted or denied. The device access manager 210 also may execute code to apply randomly the different variations to the images such as, for example, color, shading, textures, backgrounds and/or rotations for presentation to the user on the display 204. In one exemplary implementation, the device access manager 210 may execute code to use a lapped textures technique to select a texture sample and apply it to a 3D model such that the 3D model is textured and the textured 3D model is presented to the user.
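The gathering step could be sketched roughly as follows; the function, its defaults (three images, three identifiers per image), and the idea of drawing distractor labels from the same corpus are assumptions made for illustration, since the patent describes the device access manager 210 only functionally.

```python
import random

def build_challenge(labeled_images: dict[str, str], num_images: int = 3,
                    identifiers_per_image: int = 3) -> dict:
    """Assemble images, a longer shuffled identifier list, and the expected answer.

    labeled_images maps an identifier (e.g. "Boat") to image data or a file path;
    the corpus is assumed to hold enough labels to supply the distractors.
    """
    chosen = random.sample(list(labeled_images), num_images)
    distractor_pool = [label for label in labeled_images if label not in chosen]
    distractors = random.sample(distractor_pool, num_images * (identifiers_per_image - 1))
    identifiers = chosen + distractors
    random.shuffle(identifiers)
    return {
        "images": [labeled_images[label] for label in chosen],
        "identifiers": identifiers,
        "answer": chosen,  # kept by the access manager, never shown to the user
    }
```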

The device also includes memory 220, 222 storing various data. The memory 220, 222 may comprise random access memory where computer instructions and data are stored in a volatile memory device for execution by the processor 216. The memory 220, 222 may also include read-only memory where invariant low-level systems code or data for basic system functions such as basic input and output, and startup instructions reside. In addition, the memory 220, 222 may include other suitable types of memory such as programmable read-only memory, erasable programmable read-only memory, electrically erasable programmable read-only memory, hard disks, and removable memory such as microSD cards or Flash memory.

The memory 220, 222 may, in one example, include user data memory 220, which may store various parameters describing preferences for a user of the device 202. The user data memory 220 may, for example, store and provide ordinary user pass codes, user identifying information (e.g., name, address, telephone numbers, and e-mail addresses), and other such information. Separately or together, access images memory 222 may store images and identifiers used to access the device 202 or various web pages. The access images memory also may store information needed to generate the different variations to be applied to the images, such as the 3D models. In one exemplary implementation, the access images memory 222 may store multiple individual images from which the device access manager 210 may select for presentation on the display 204. In another exemplary implementation, the access images memory 222 may store multiple single composite images from which the device access manager 210 may select for presentation on the display 204. The single composite images may include multiple images that are objects within the single composite image, where the objects may be arranged in various different manners (e.g., right-to-left, top-to-bottom, etc.).

The device 202 may communicate with other devices or a network through a wireless interface 218. The wireless interface 218 may provide for communication by the device 202 with messaging services such as text messaging, e-mail, and telephone voice mail messaging. In addition, the wireless interface 218 may support downloads and uploads of content and computer code over a wireless network. The wireless interface 218 may additionally provide for voice communications in a wireless network in a familiar manner. As one example, the wireless interface 218 may be used to interact with internet web pages that are to be displayed on display 204, and to submit orientation information to a server or servers remote from the device 202.

FIG. 3 is a flowchart of an example process 300 for limiting access to a device or a computing service. In general, the process 300 involves presenting images and identifiers to a user and determining whether the user can select the correct identifier for each of the images from the provided identifiers, thus concluding that the user is a human who should be granted access to the device or service.

Process 300 may include presenting to a user multiple images, multiple identifiers and a challenge to select one of the identifiers for each of the images (302). For example, as discussed above in FIGS. 1A-1D, images 102a-102c and identifiers 104 may be presented to the user. The challenge may be implicit in that the images are initially presented as being "unanswered" as illustrated in FIG. 1A. The challenge also may be explicit in that, for example, instructions are presented to the user to identify each of the images and to submit the identifiers. For example, FIGS. 1A-1D illustrate exemplary instructions that may be provided to the user in the screen shot 100.

As discussed above, the images presented to the user may include 3D models that may be generated in response to a request for access. In one exemplary implementation, to provide access to a computing service, a server on a network may generate the 3D models for presentation to the user. In another exemplary implementation, to provide access to a device or to a service, a module on the device (e.g., device access manager 210 of FIG. 2) may generate the 3D models for presentation to the user.

The images presented to the user may include many variations on the same images. For example, if the images are 3D models, the same 3D models may be randomly colored, shaded, textured, rotated and/or set against different random backgrounds so as to make it more difficult for a non-human to determine the proper identifier for the image. Also, by using different variations of the same 3D model, a smaller corpus of 3D models may be used and yet still achieve many, many different variations.

Process 300 also includes receiving the selected identifiers from the user (304). For example, the selected identifiers may be communicated to a module within a device or the selected identifiers may be communicated to a server on a network. The selected identifiers are received and a comparison is made to determine if the selected identifiers match an answer to the challenge (306). The answer to the challenge may be the correct identifiers for each of the presented images. If the selected identifiers do not match the answer, then access is denied (308). If the selected identifiers match the answer, then access is provided (310).
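Read as code, the flowchart collapses to something like the sketch below, where the four callables stand in for whatever presentation, input, and access-control machinery a given implementation uses; all of the names are hypothetical.

```python
def process_300(present_challenge, receive_selection, grant_access, deny_access):
    """Mirror of the flowchart; parenthesized numbers refer to the steps above."""
    challenge = present_challenge()          # (302) images, identifiers, and challenge
    selected = receive_selection(challenge)  # (304) identifiers chosen by the user
    if selected == challenge["answer"]:      # (306) compare against the answer
        return grant_access()                # (310) access provided
    return deny_access()                     # (308) access denied
```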

FIG. 4 is a swim lane diagram of an example process 400 for granting user access to a web page and/or to an online service. A client may request access to a web page and/or to an online service (401). A request for access by a client may be received at an access server (402). The access server may request and retrieve multiple images and identifiers from an image repository (404). For example, the images (e.g., 3D models) may be stored on a storage medium as part of an image repository. The images may be stored along with metadata, which may further describe or include additional information regarding the image. The respective identifiers may be stored along with the images and/or the identifiers may be a part of the metadata about each image.
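One way to picture a repository entry that carries its identifier alongside the image and any additional metadata is the small sketch below; the class, field names, and file paths are purely illustrative.

```python
from dataclasses import dataclass, field

@dataclass
class RepositoryImage:
    """One hypothetical entry in the image repository."""
    image_path: str                 # rendered image or 3D model file
    identifier: str                 # the correct label, e.g. "giraffe"
    metadata: dict = field(default_factory=dict)  # further descriptive information

CATALOG = [
    RepositoryImage("models/giraffe.obj", "giraffe", {"category": "animal"}),
    RepositoryImage("models/teapot.obj", "teapot"),
]
```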

In one exemplary implementation, the image repository may store multiple individual images from which access server may select for presentation to the client. In another exemplary implementation, the image repository may store multiple single composite images from which the access server may select for presentation to the client. The single composite images may include multiple images that are objects within the single composite image, where the objects may be arranged in various different manners (e.g., right-to-left, top-to-bottom, etc.).

The access server may be configured to generate and to apply one or more variations to the retrieved images (406). For example, if the images are 3D models, the access server may randomly apply a color to one or more of the images. Also, the access server may randomly apply a texture to one or more of the images. In one exemplary implementation, the access server may use a lapped texture technique to apply a texture to the 3D model. Also, the access server may set the images against different backgrounds, shade the images and/or rotate the images in different orientations. Although the variations may be applied to each of the images, the identifier for the image remains the same. For example, although a 3D model of a giraffe may be colored red and textured with fur, the identifier for the 3D model is still "giraffe." A human being viewing the colored and textured giraffe will be able to perceive that the 3D model is a giraffe and that the correct identifier is a giraffe; however, an automated computing system may have a difficult time determining that the 3D model is a giraffe, especially if the automated computing system is using standard giraffe characteristics to make this guess.

The access server may be configured to present the images and the identifiers along with a challenge to the client that requested access (408). The client may receive and display the images and the identifiers (410). The client may receive selected identifiers from a user for each of the images (412) and may submit the selected identifiers to the access server (414).

The access server may receive the selected identifiers from the client (416) and may compare the selected identifiers to the correct identifiers for the images that were presented to the client (418). The access server may maintain a table in memory of the answer to the challenge that was presented to the user. For instance, the access server may maintain a table that tracks the images and/or identifiers that were served to a particular client such that when the selected identifiers are received, the selected identifiers may be compared against the identifiers in the table.
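That answer table could be as simple as a mapping from a challenge token to the identifiers that were served, as in the sketch below. The random token, the in-memory dictionary, and the single-use challenge are implementation assumptions rather than requirements stated in the patent.

```python
import secrets

# Hypothetical in-memory answer table; a production server might use a session
# store or database keyed the same way.
ANSWER_TABLE: dict[str, list[str]] = {}

def issue_challenge(images: list[str], identifiers: list[str], answer: list[str]) -> dict:
    challenge_id = secrets.token_hex(16)
    ANSWER_TABLE[challenge_id] = answer  # remember what was served to this client
    return {"id": challenge_id, "images": images, "identifiers": identifiers}

def verify_submission(challenge_id: str, selected: list[str]) -> bool:
    answer = ANSWER_TABLE.pop(challenge_id, None)  # each challenge is single-use
    return answer is not None and selected == answer
```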

If the selected identifiers match, then the access server may grant access and redirect the client's browser to the appropriate web page in the website or to the appropriate online service, as the case may be (420). The web page(s) corresponding to the secure portion of the website may be displayed on the client browser (422).

FIG. 5 shows an example of a generic computer device 500 and a generic mobile computer device 550, which may be used with the techniques described here. Computing device 500 is intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. Computing device 550 is intended to represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smart phones, and other similar computing devices. The components shown here, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the inventions described and/or claimed in this document.

Computing device 500 includes a processor 502, memory 504, a storage device 506, a high-speed interface 508 connecting to memory 504 and high-speed expansion ports 510, and a low speed interface 512 connecting to low speed bus 514 and storage device 506. Each of the components 502, 504, 506, 508, 510, and 512, are interconnected using various busses, and may be mounted on a common motherboard or in other manners as appropriate. The processor 502 can process instructions for execution within the computing device 500, including instructions stored in the memory 504 or on the storage device 506 to display graphical information for a GUI on an external input/output device, such as display 516 coupled to high speed interface 508. In other implementations, multiple processors and/or multiple buses may be used, as appropriate, along with multiple memories and types of memory. Also, multiple computing devices 500 may be connected, with each device providing portions of the necessary operations (e.g., as a server bank, a group of blade servers, or a multi-processor system).

The memory 504 stores information within the computing device 500. In one implementation, the memory 504 is a volatile memory unit or units. In another implementation, the memory 504 is a non-volatile memory unit or units. The memory 504 may also be another form of computer-readable medium, such as a magnetic or optical disk.

The storage device 506 is capable of providing mass storage for the computing device 500. In one implementation, the storage device 506 may be or contain a computer-readable medium, such as a floppy disk device, a hard disk device, an optical disk device, or a tape device, a flash memory or other similar solid state memory device, or an array of devices, including devices in a storage area network or other configurations. A computer program product can be tangibly embodied in an information carrier. The computer program product may also contain instructions that, when executed, perform one or more methods, such as those described above. The information carrier is a computer- or machine-readable medium, such as the memory 504, the storage device 506, or memory on processor 502.

The high speed controller 508 manages bandwidth-intensive operations for the computing device 500, while the low speed controller 512 manages lower bandwidth-intensive operations. Such allocation of functions is exemplary only. In one implementation, the high-speed controller 508 is coupled to memory 504, display 516 (e.g., through a graphics processor or accelerator), and to high-speed expansion ports 510, which may accept various expansion cards (not shown). In the implementation, low-speed controller 512 is coupled to storage device 506 and low-speed expansion port 514. The low-speed expansion port, which may include various communication ports (e.g., USB, Bluetooth, Ethernet, wireless Ethernet), may be coupled to one or more input/output devices, such as a keyboard, a pointing device, a scanner, or a networking device such as a switch or router, e.g., through a network adapter.

The computing device 500 may be implemented in a number of different forms, as shown in the figure. For example, it may be implemented as a standard server 520, or multiple times in a group of such servers. It may also be implemented as part of a rack server system 524. In addition, it may be implemented in a personal computer such as a laptop computer 522. Alternatively, components from computing device 500 may be combined with other components in a mobile device (not shown), such as device 550. Each of such devices may contain one or more of computing device 500, 550, and an entire system may be made up of multiple computing devices 500, 550 communicating with each other.

Computing device 550 includes a processor 552, memory 564, an input/output device such as a display 554, a communication interface 566, and a transceiver 568, among other components. The device 550 may also be provided with a storage device, such as a microdrive or other device, to provide additional storage. Each of the components 550, 552, 564, 554, 566, and 568, are interconnected using various buses, and several of the components may be mounted on a common motherboard or in other manners as appropriate.

The processor 552 can execute instructions within the computing device 550, including instructions stored in the memory 564. The processor may be implemented as a chipset of chips that include separate and multiple analog and digital processors. The processor may provide, for example, for coordination of the other components of the device 550, such as control of user interfaces, applications run by device 550, and wireless communication by device 550.

Processor 552 may communicate with a user through control interface 558 and display interface 556 coupled to a display 554. The display 554 may be, for example, a TFT LCD (Thin-Film-Transistor Liquid Crystal Display) or an OLED (Organic Light Emitting Diode) display, or other appropriate display technology. The display interface 556 may comprise appropriate circuitry for driving the display 554 to present graphical and other information to a user. The control interface 558 may receive commands from a user and convert them for submission to the processor 552. In addition, an external interface 562 may be provided in communication with processor 552, so as to enable near area communication of device 550 with other devices. External interface 562 may provide, for example, for wired communication in some implementations, or for wireless communication in other implementations, and multiple interfaces may also be used.

The memory 564 stores information within the computing device 550. The memory 564 can be implemented as one or more of a computer-readable medium or media, a volatile memory unit or units, or a non-volatile memory unit or units. Expansion memory 574 may also be provided and connected to device 550 through expansion interface 572, which may include, for example, a SIMM (Single In Line Memory Module) card interface. Such expansion memory 574 may provide extra storage space for device 550, or may also store applications or other information for device 550. Specifically, expansion memory 574 may include instructions to carry out or supplement the processes described above, and may include secure information also. Thus, for example, expansion memory 574 may be provided as a security module for device 550, and may be programmed with instructions that permit secure use of device 550. In addition, secure applications may be provided via the SIMM cards, along with additional information, such as placing identifying information on the SIMM card in a non-hackable manner.

The memory may include, for example, flash memory and/or NVRAM memory, as discussed below. In one implementation, a computer program product is tangibly embodied in an information carrier. The computer program product contains instructions that, when executed, perform one or more methods, such as those described above. The information carrier is a computer- or machine-readable medium, such as the memory 564, expansion memory 574, or memory on processor 552, that may be received, for example, over transceiver 568 or external interface 562.

Device 550 may communicate wirelessly through communication interface 566, which may include digital signal processing circuitry where necessary. Communication interface 566 may provide for communications under various modes or protocols, such as GSM voice calls, SMS, EMS, or MMS messaging, CDMA, TDMA, PDC, WCDMA, CDMA2000, or GPRS, among others. Such communication may occur, for example, through radio-frequency transceiver 568. In addition, short-range communication may occur, such as using a Bluetooth, WiFi, or other such transceiver (not shown). In addition, GPS (Global Positioning System) receiver module 570 may provide additional navigation- and location-related wireless data to device 550, which may be used as appropriate by applications running on device 550.

Device 550 may also communicate audibly using audio codec 560, which may receive spoken information from a user and convert it to usable digital information. Audio codec 560 may likewise generate audible sound for a user, such as through a speaker, e.g., in a handset of device 550. Such sound may include sound from voice telephone calls, may include recorded sound (e.g., voice messages, music files, etc.) and may also include sound generated by applications operating on device 550.

The computing device 550 may be implemented in a number of different forms, as shown in the figure. For example, it may be implemented as a cellular telephone 580. It may also be implemented as part of a smart phone 582, personal digital assistant, or other similar mobile device.

Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, specially designed ASICs (application specific integrated circuits), computer hardware, firmware, software, and/or combinations thereof. These various implementations can include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, coupled to receive data and instructions from, and to transmit data and instructions to, a storage system, at least one input device, and at least one output device.

These computer programs (also known as programs, software, software applications or code) include machine instructions for a programmable processor, and can be implemented in a high-level procedural and/or object-oriented programming language, and/or in assembly/machine language. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus and/or device (e.g., magnetic discs, optical disks, memory, Programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.

To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to the user and a keyboard and a pointing device (e.g., a mouse or a trackball) by which the user can provide input to the computer. Other kinds of devices can be used to provide for interaction with a user as well; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user can be received in any form, including acoustic, speech, or tactile input.

The systems and techniques described here can be implemented in a computing system that includes a back end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front end component (e.g., a client computer having a graphical user interface or a Web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back end, middleware, or front end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), and the Internet.

The computing system can include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.

A number of embodiments have been described. Nevertheless, it will be understood that various modifications may be made without departing from the spirit and scope of the invention.

In addition, the logic flows depicted in the figures do not require the particular order shown, or sequential order, to achieve desirable results. In addition, other steps may be provided, or steps may be eliminated, from the described flows, and other components may be added to, or removed from, the described systems. Accordingly, other embodiments are within the scope of the following claims.


* * * * *

Sunday, June 3, 2012

Real-time bookmarking of streaming media assets




United States Patent8,191,103
Hofrichter ,   et al.May 29, 2012

Real-time bookmarking of streaming media assets 

Abstract
A method for real-time bookmarking of streaming media assets is disclosed. In one embodiment, the method includes dynamically changing a presentation order of a plurality of segments based on one or more bookmark signals from a viewer.

Inventors:Hofrichter; Klaus (Santa Clara, CA), Rafey; Richter A. (Santa Clara, CA)
Assignee:Sony Corporation (Tokyo, JP)
Sony Electronics Inc. (Park Ridge, NJ) 
Appl. No.:11/031,842
Filed:January 6, 2005

Related U.S. Patent Documents

Application Number: 09/651,433    Filing Date: Aug. 30, 2000 (now abandoned)

Current U.S. Class:725/142 ; 725/131; 725/134; 725/139
Current International Class:H04N 7/16 (20110101)
Field of Search:725/87,142,139,134


References Cited [Referenced By]

U.S. Patent Documents
4745549May 1988Hashimoto
4775935October 1988Yourick
4965825October 1990Harvey et al.
5223924June 1993Strubbe
5231494July 1993Wachob
5353121October 1994Young et al.
5371551December 1994Logan et al.
5481296January 1996Cragun et al.
5534911July 1996Levitan
5553281September 1996Brown et al.
5614940March 1997Cobbley et al.
5619249April 1997Billock et al.
5625464April 1997Compoint et al.
5635979June 1997Kostreski et al.
5638443June 1997Stefik et al.
5699107December 1997Lawler et al.
5740549April 1998Reilly et al.
5758257May 1998Herz et al.
5758259May 1998Lawler
5797010August 1998Brown
5826102October 1998Escobar et al.
5852435December 1998Vigneaux et al.
5861906January 1999Dunn et al.
5884056March 1999Steele
5900905May 1999Shoff et al.
6029045February 2000Picco et al.
6064380May 2000Swenson et al.
6084581July 2000Hunt
6144375November 2000Jain et al.
6160570December 2000Sitnik
6236395May 2001Sezan et al.
6243725June 2001Hempleman et al.
6269369July 2001Robertson
6289346September 2001Milewski et al.
6366296April 2002Boreczky et al.
6377861April 2002York
6460036October 2002Herz
6463444October 2002Jain et al.
6483986November 2002Krapf
6574378June 2003Lim
6848002January 2005Detlef
2002/0023230February 2002Bolnick et al.
2002/0170068November 2002Rafey et al.
2002/0194260December 2002Headley et al.
2003/0174861September 2003Levy et al.
2006/0212900September 2006Ismail et al.

Other References

"Automatic Construction of Personalized TV News Programs," Association of Computing Machinery (ACM) Multimedia Conf., 323-331 (Presented Nov. 3, 1999). cited by examiner .
Electronic House Com, EchoStart Communications Corporation and Geocast Network Systems Align to Deliver New Personalized Interactive Broadband Services to PC Users Via Satellite, Jun. 4, 2002, http://209.6.10.99/news101600echostar.html, 3 pages. cited by other .
Lost Remote, The TV Revolution is Coming, Lost Remote TV New Media & Television Convergence News, TV News Gets (too?) Personal by Cory Bergman, Sep. 25, 2000, http://www.lostremote.com/producer/personal.html, 2 pages. cited by other.

Primary Examiner: Bui; Kieu Oanh T 
Assistant Examiner: Alcon; Fernando 
Attorney, Agent or Firm: Blakely, Sokoloff, Taylor & Zafman LLP

Parent Case Text



RELATED APPLICATION

This application is a continuation application of Ser. No. 09/651,433, filed Aug. 30, 2000 now abandoned.
Claims



What is claimed is:

1. A computerized method comprising: receiving, by an on-site media system, a plurality of teasers and a plurality of different media segments from a content provider, the received media segments to be presented in a current presentation order, wherein each of the plurality of teasers is an audio/video teaser; sequentially presenting a video component of each of the plurality of teasers on a local display, wherein the sequential presentation is a temporal sequential presentation; receiving a bookmark signal associated with a presented teaser during the sequential presentation of the video component of the presented teaser, wherein a media segment associated with the presented teaser is marked in response to receiving the bookmark signal; dynamically changing, in response to receiving the bookmark signal, a presentation position of the marked media segment in the current presentation order; and presenting the plurality of different media segments in the changed presentation order, wherein the marked media segment is presented before an unmarked and different media segment is presented and the plurality of media segments is presented subsequent to the presentation of the plurality of teasers.

2. The method of claim 1, wherein the bookmark signal marks a media segment as of interest.

3. The method of claim 1, wherein the bookmark signal marks a media segment as not of interest.

4. The method of claim 3, wherein the changed presentation order comprises not presenting the marked media segment.

5. The method of claim 1, wherein receiving the plurality of teasers comprises using a disk/tuner cartridge.

6. The method of claim 1, wherein receiving the plurality of media segments comprises using a disk/tuner cartridge.

7. The method of claim 1, wherein the teaser is associated with multiple media segments.

8. The method of claim 1, wherein multiple teasers are associated with multiple media segments.

9. A non-transitory machine readable medium having executable instructions to cause a processor to perform a method comprising: receiving, by an on-site media system, a plurality of teasers and a plurality of different media segments from a content provider, the received media segments to be presented in a current presentation order, wherein each of the plurality of teasers is an audio/video teaser; presenting a video component of each of the plurality of teasers on a local display, wherein the sequential presentation is a temporal sequential presentation; receiving a bookmark signal associated with a presented teaser during the sequential presentation of the video component of the presented teaser, wherein a media segment associated with the presented teaser is marked in response to receiving the bookmark signal; dynamically changing, in response to receiving the bookmark signal, a presentation position of the marked media segment in the current presentation order; and presenting the plurality of different media segments in the changed presentation order, wherein the marked media segment is presented before an unmarked and different media segment is presented and the plurality of media segments is presented subsequent to the presentation of the plurality of teasers.

10. The non-transitory machine readable medium of claim 9, wherein the bookmark signal marks a media segment as of interest.

11. The non-transitory machine readable medium of claim 9, wherein the bookmark signal marks a media segment as not of interest.

12. The non-transitory machine readable medium of claim 11, wherein the changed presentation order comprises not presenting the marked media segment.

13. The non-transitory machine readable medium of claim 9, wherein receiving the plurality of teasers comprises using a disk/tuner cartridge.

14. The non-transitory machine readable medium of claim 9, wherein receiving the plurality of media segments comprises using a disk/tuner cartridge.

15. The non-transitory machine readable medium of claim 9, wherein the teaser is associated with multiple media segments.

16. The non-transitory machine readable medium of claim 9, wherein multiple teasers are associated with multiple media segments.

17. A system comprising: a disk/tuner cartridge to receive, by an on-site media system, a plurality of teasers and a plurality of different media segments from a content provider, the received media segments to be presented in a current presentation order, wherein each of the plurality of teasers is an audio/video teaser; and a processor to sequentially present a video component of each of the plurality of teasers, wherein the sequential presentation is a temporal sequential presentation, wherein the disk-tuner cartridge receives a bookmark signal associated with a presented teaser during the sequential presentation of the video component of the presented teaser, marks a media segment associated with the presented teaser in response to receiving the bookmark signal and dynamically changes, in response to receiving the bookmark signal, a presentation position of the marked media segment in the current presentation order, the plurality of different media segments are presented by the processor in the changed presentation order, wherein the marked media segment is presented before an unmarked and different media segment is presented and the plurality of media segments is presented subsequent to the presentation of the plurality of teasers.

18. The system of claim 17, wherein the bookmark signal marks a media segment as of interest.

19. The system of claim 17, wherein the bookmark signal marks a media segment as not of interest.

20. The system of claim 19, wherein the changed presentation order comprises not presenting the marked media segment.

21. The system of claim 17, wherein the teaser is associated with multiple media segments.

22. The system of claim 17, wherein multiple teasers are associated with multiple media segments.

23. An apparatus comprising: means for receiving a plurality of teasers and a plurality of different media segments, the received media segments to be presented in a current presentation order, wherein each of the plurality of teasers is an audio/video teaser; means for sequentially presenting a video component of each of the plurality of teasers on a local display, wherein the sequential presentation is a temporal sequential presentation; means for receiving a bookmark signal associated with a presented teaser during the sequential presentation of the video component of the presented teaser, wherein a media segment associated with the presented teaser is marked in response to receiving the bookmark signal; means for dynamically changing, in response to receiving the bookmark signal, a presentation position of the marked media segment in the current presentation order; and means for presenting the plurality of different media segments in the changed presentation order, wherein the marked media segment is presented before an unmarked and different media segment is presented and the plurality of media segments is presented subsequent to the presentation of the plurality of teasers.

24. The apparatus of claim 23, wherein the bookmark signal marks a media segment as of interest.

25. The apparatus of claim 24, wherein the bookmark signal marks a media segment as not of interest.

26. The apparatus of claim 25, wherein the changed presentation order comprises not presenting the marked media segment.

27. The apparatus of claim 23, wherein the means for receiving the plurality of teasers comprises using a disk/tuner cartridge.

28. The apparatus of claim 23, wherein the means for receiving the plurality of media segments comprises using a disk/tuner cartridge.

29. The apparatus of claim 23, wherein the teaser is associated with multiple media segments.

30. The apparatus of claim 23, wherein multiple teasers are associated with multiple media segments.
Description



FIELD OF INVENTION

The invention is related to audio/video storage and multimedia presentation systems.

BACKGROUND OF THE INVENTION

A multimedia presentation system enables a viewer to select one or more segments to watch by displaying a series of teasers, or short clips, that describe the segments.

In some systems, the teasers are presented first, followed by the full stories. The user can interact with the presentation engine to influence the presentation sequence by either jumping to a specific story during the presentation of the respective teaser or by skipping a story to continue with the next story, or another continuation point.

The problem with this system is that it only allows changing the "position-pointer" in an ongoing presentation. There is also no real indexing of the stories, and the viewer is unable to set up a presentation sequence dynamically for passive viewing afterwards.

SUMMARY OF THE INVENTION

A method for real-time bookmarking of streaming media assets is disclosed. In one embodiment, the method includes dynamically changing a presentation sequence of a plurality of video segments based on one or more bookmark signals from a viewer.

BRIEF DESCRIPTION OF THE DRAWINGS

The present invention is illustrated by way of example and not limitation in the figures of the accompanying drawings, in which like references indicate similar elements, and in which:

FIG. 1 shows an embodiment of a method for bookmarking.

FIG. 2 is a block diagram of an on-site media system having a dedicated service module.

FIG. 3A is a block diagram of data recorded on a dedicated service module.

FIG. 3B is a diagram of multiple designs of a dedicated service module.

FIG. 4 is a block diagram of another configuration of a dedicated service module.

FIG. 5 is a functional block diagram of an interactive media system including content provider and viewer systems with functions.

FIG. 6A is a diagram of a fine-grain media stream.

FIG. 6B is a television view generated using the interactive media system.

DETAILED DESCRIPTION

A method for real-time bookmarking of streaming media assets is disclosed. In one embodiment, the method includes dynamically changing the presentation order of a plurality of video segments based on one or more bookmark signals from a viewer.

An advantage of this method is that the viewer receives a full overview of the available segment material. It is not necessary for the viewer to revisit the teasers to access other segment content of interest. The viewer can easily and dynamically determine the presentation sequence for subsequent passive and customized viewing.

An apparatus, such as an interactive service module, can present television segments to a viewer on demand. The interactive service module can perform a method for real-time bookmarking of streaming media assets. The interactive service module may include a tuner to receive data for television segments, and a computer readable memory to store the segment data. Teasers associated with each segment may also be received by the tuner and stored in memory. Metadata may be used to identify each segment and its corresponding teaser. The metadata may also be received by the tuner and stored in memory. The metadata may be used to enable the viewer to control the presentation order of several segments that are displayed to the viewer. A presentation engine of the interactive service module may present the content based on viewer preferences.

For example, digital Audio/Video (AV) content material, e.g. video clips representing a television news segment, may be available to the interactive service module from random access storage, either locally or through a network. For each story, represented by one or more video clips, an additional teaser video clip is available from storage. Alternatively, a table of contents (TOC) can be retrieved from storage. A teaser clip introduces a single story and gives an impression about the topic of the story. Descriptive metadata may be used by the interactive service module to identify separate stories in the video material and to identify their corresponding teasers.
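
To make the arrangement concrete, the story, teaser, and metadata relationship described above might be modeled along the following lines. This is a minimal Python sketch rather than the patent's implementation; the Clip and Story names and the example URIs are assumptions chosen for illustration.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Clip:
        uri: str             # location of the A/V clip in random access storage
        duration_s: float    # playback length in seconds

    @dataclass
    class Story:
        story_id: str
        title: str
        teaser: Clip                                         # short clip introducing the story
        segments: List[Clip] = field(default_factory=list)   # full-length clips for the story

    # Descriptive metadata ties each teaser to its full story, which is how the
    # presentation engine knows which segment a later bookmark refers to.
    stories = [
        Story("s1", "Weather",  Clip("store://teasers/s1", 8.0),  [Clip("store://full/s1", 90.0)]),
        Story("s2", "Airplane", Clip("store://teasers/s2", 10.0), [Clip("store://full/s2", 120.0)]),
    ]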

A dynamic navigation mechanism to perform real-time bookmarking may be executed by the interactive service module. The mechanism enables a viewer to send a signal to the presentation engine during the presentation of a teaser indicating that the corresponding story is of interest. The presentation of the teasers continues until all teasers have been presented, but the subsequent presentation structure of the corresponding stories is changed according to the viewer's bookmark signals. This results in a customized presentation of the bookmarked stories.

A method for bookmarking is shown in FIG. 1. For a plurality of segments, each segment is associated with a corresponding teaser, step 110. Each teaser is displayed to the viewer in a sequential order, step 120. During the presentation of a given teaser, the viewer is enabled to send a bookmark signal indicating that the corresponding segment or story, is of interest, step 130. If the viewer sends a bookmark signal, the corresponding segment is bookmarked as of interest to the viewer, step 140. The method determines whether all teasers have been presented to the viewer, step 150. If not, the next teaser in the sequential order is displayed and steps 120 through 150 are repeated. If all teasers have been presented, then the presentation order of the segments is dynamically changed based on the bookmark signals, step 160. For example, the programs that are bookmarked may be displayed before the programs that are not bookmarked. The segments are presented to the viewer in the dynamically changed presentation order, step 170.
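
The same flow can be sketched in Python, reusing the Story objects from the sketch above. The play and wait_for_signal callables stand in for the presentation engine and the viewer's input device; both are assumptions here, not elements named by the patent.

    def present_with_bookmarking(stories, play, wait_for_signal):
        """Rough sketch of the FIG. 1 flow (steps 110 through 170)."""
        bookmarked = set()

        # Steps 120-150: present each teaser in order and record any bookmark signal.
        for story in stories:
            play(story.teaser)
            if wait_for_signal(timeout_s=story.teaser.duration_s) == "bookmark":
                bookmarked.add(story.story_id)

        # Step 160: dynamically change the presentation order.  The sort is stable,
        # so stories keep their original relative order within each group.
        ordered = sorted(stories, key=lambda s: s.story_id not in bookmarked)

        # Step 170: present the full segments in the changed order,
        # bookmarked stories before unmarked ones.
        for story in ordered:
            for segment in story.segments:
                play(segment)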

Alternatively, instead of sending a bookmark signal to indicate that the story is of interest, a viewer can send a signal to indicate that the story is not of interest. The "not of interest" signal can be used to place the corresponding story at a later position in the presentation sequence, or to remove the story entirely from the presentation sequence. A neutral signal may also be sent to indicate that the viewer is neither interested nor uninterested in the corresponding program.

The method for bookmarking and dynamically changing of the presentation order is not limited to bookmarking during the teaser presentation. In one embodiment, the method for bookmarking may also be used during a presentation of a story to indicate that the current story is of interest, but should be presented later or with reduced priority. Thus, this enables the viewer to postpone the presentation of the current story, and changes the presentation order dynamically.
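
One way to fold these alternative signals into the reordering step is a priority value per signal type, with "not of interest" either removing the story from the sequence or pushing it last, and "postpone" moving it later. This is an illustrative sketch under assumed signal names, not the patent's encoding.

    # Hypothetical signal names; an unset entry is treated as neutral.
    PRIORITY = {"interest": 0, "neutral": 1, None: 1, "postpone": 2, "not_interest": 3}

    def reorder(stories, signals, drop_not_interesting=True):
        """signals maps story_id to "interest", "neutral", "postpone",
        "not_interest", or None; returns the new presentation sequence."""
        kept = []
        for story in stories:
            sig = signals.get(story.story_id)
            if sig == "not_interest" and drop_not_interesting:
                continue                      # remove the story from the sequence
            kept.append(story)
        # Stable sort: stories with equal priority keep the broadcast order.
        return sorted(kept, key=lambda s: PRIORITY.get(signals.get(s.story_id), 1))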

In one embodiment, a method to bookmark or postpone a story is not limited to a television news segment environment. The method can be applied to situations where a streaming media presentation order is dynamically changed based on viewer input, such as a table of contents of a video library, a music video, or an audio-only application, for example.

FIGS. 2 through 5 show embodiments of an interactive service module for real-time bookmarking of streaming media assets. Referring now to FIG. 2, a block diagram of an on-site media system having a dedicated service module is shown, in accordance with one embodiment of the present invention. To provide a context for the dedicated service module, on-site media system 200 shows one embodiment of a larger system in which the dedicated service module may be implemented to provide a dedicated on-site media service. On-site media system 200 includes a control/data bus 202 for communicating information, a central processor unit 204 for processing information and instructions, coupled to bus 202, and a memory unit 206 for storing information and instructions, coupled to bus 202. Memory unit 206 can include random access memory (RAM) 206a, for storing temporary information and instructions for central processor unit 204, and read only memory (ROM) 206b, for storing static information and instructions for central processor unit 204. System 200 also includes a display device 218 coupled to bus 202, for viewing data, and a signal source 211, coupled to dedicated service module 210 via line 213a for providing a signal.

On-site media system 200 also includes a dedicated service module 210, coupled to bus 202, to provide a media signal. Dedicated service module 210 can also be referred to as a dedicated media device or a dedicated service cartridge, depending on its specific configuration. Dedicated service module 210 enables the on-site media service to be implemented by providing dedicated tuning and guaranteed storage for a broadcast signal. The dedicated tuning provides a dedicated path from the broadcast stream into the guaranteed storage device. More specifically, dedicated service module 210 includes one or more dedicated tuners and one or more dedicated media storage devices, coupled to each other. More details of dedicated service module 210 are provided in subsequent figures. Dedicated service module 210 can allow for proprietary encoding of service information in a datacast associated with broadcast streams, with built-in support in the dedicated service module for processing the service information. The dedicated service module can also support software reconfiguration via broadcast at several different levels (e.g., device upgrade, software platform upgrade, and content upgrade).
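
The pairing of dedicated tuners with guaranteed storage might be sketched as a simple data structure. The field names below are assumptions for illustration; as the later embodiments note, the guaranteed storage could be a whole disk, an allocated partition, or a shared device with reserved capacity.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class TunerStoragePair:
        tuner_freq_mhz: float     # frequency the dedicated tuner is preset to
        storage_path: str         # disk, partition, or directory guaranteed to this signal
        capacity_gb: int          # guaranteed capacity for the broadcast stream

    @dataclass
    class DedicatedServiceModule:
        pairs: List[TunerStoragePair] = field(default_factory=list)
        open_slots: int = 1       # room for additional plug-in cartridges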

Signal source 211 can be any device, such as an antenna for receiving a broadcast, a cable interface for line transmission, or a dish for receiving satellite broadcast. Display device 218 of FIG. 2 can be any type of display, including an analog or a digital television, or a personal computer (PC) display. While processor 204 and memory 206 are shown as individual entities, they may be incorporated into another component. For example, processor 204 and memory 206 may be new components or may be existing components in display device 218, e.g. a digital television (DTV), dedicated service module 210, or in a set-top box (not shown). Additionally, while dedicated service module 210 is shown individually, it may be integrated into other components, such as display device 218, as shown in configuration B of subsequent FIG. 3B.

System 200 also includes an optional Internet connection 216 coupled to bus 202 for transmitting information to, and receiving information from, the Internet. The information may be a video segment, such as an A/V clip for example. An optional user input device 212, e.g. a keypad, remote control, etc., coupled to bus 202 is also included in system 200 of FIG. 2, to provide communication between system 200 and a user. Optional local receiver/source 208, which can be a set top box in one embodiment, is coupled to bus 202 to provide a media signal. Optional local receiver/source 208 can alternatively be located inside display device 218. Optional local receiver/source 208 can allow viewer options such as simultaneous viewing of a segment through a tuner or source that is independent of the dedicated tuners of dedicated service module 210. Thus, the dedicated tuner, e.g. 201, in dedicated service module 210, always provides a dedicated path for a given media signal.

Bus 202 provides an exemplary coupling configuration of devices in on-site media system 200. Bus 202 is shown as a single bus line for clarity. It is appreciated by those skilled in the art that bus 202 can include subcomponents of specific data lines and/or control lines for the communication of commands and data between appropriate devices. It is further appreciated by those skilled in the art that bus 202 can be a parallel configuration or an IEEE 1394 configuration, and that bus 202 can include numerous gateways, interconnects, and translators, as appropriate for a given application.

It is also appreciated that on-site media system 200 is exemplary only and that the present invention can operate within a number of different media systems including a commercial media system, a general purpose computer system, etc. Furthermore, the present invention is well-suited to using a host of intelligent devices that have similar components as exemplary on-site media system 200.

Referring now to FIG. 3A, a block diagram of a dedicated service module is shown, in accordance with one embodiment of the present invention. Dedicated service module 210, also referred to as a dedicated media device, or a dedicated service cartridge depending upon the configuration, includes a media storage adapter 306, a tuner adapter 308, and interfaces 304a and 304b for tuner adapter 308 and for media storage adapter 306, respectively. Media storage adapter 306 includes appropriate mechanical and electrical components to accommodate a dedicated media storage device. Similarly, tuner adapter 308 includes appropriate mechanical and electrical components to accommodate a dedicated tuner. Media storage adapter 306 is coupled to tuner adapter 308 via one or more dedicated tuners, e.g. tuner 201a, and one or more dedicated disks, e.g. 203a, respectively coupled together in exclusive pairs, in the present embodiment.

Interface 304a, in turn, includes a multiplexed broadcast stream 213a coupled to tuner adapter 308. Interface 304b includes a two-way display device control line 316, which can be coupled to media storage adapter 306 via bus 315. In one embodiment, bus 315 can be coupled to bus 202 of FIG. 2. Interface 304b also includes an optional Internet connection 213b that may be directly coupled to one or more dedicated cartridges, e.g. open slot 313, in one embodiment. In another embodiment, only a dedicated storage device is coupled to optional Internet connection 213b because the Internet connection bypasses the need for a dedicated tuner.

The present embodiment of dedicated service module 210 includes multiple tuners and disks, exclusively coupled to each other as shown. However, the present invention is well-suited to many different configurations. For example, one or more allocated partitions, or portions, of a single disk can be utilized in lieu of separate storage devices, e.g. one hard drive with five partitions replaces five separate hard drives. In yet another embodiment, a "gang" of multiple tuners could be cooperatively shared across a current active receiver, under the assumption that not all of the multiple broadcast signals would want to be tuned and recorded at all times. In this latter embodiment, each broadcast signal can still have a guaranteed capacity of disk storage. This latter embodiment would trade off the cost of a service module with the level of dedicated service desired.

While the present embodiment arranges multiple tuner-storage pairs, e.g. the 203a and 201a pair and the 203b and 201b pair, in a parallel manner, the present invention is well-suited to alternative coupling arrangements. For example, in one embodiment, tuner-storage pairs may be daisy chained to deliver the multiplexed broadcast signal to each dedicated tuner.

Bus 315, for providing the multiplexed broadcast stream, conforms to the Institute of Electrical and Electronics Engineers (IEEE) 1394 standard in one embodiment. Furthermore, two-way media/data line 316 is also compatible with the IEEE 1394 standard, in one embodiment.

The connection to the optional local receiver, e.g. optional local receiver/source 208 of FIG. 2 (viz., a tuner in a television or Set Top Box (STB)), enables a viewer to access segments from dedicated service module 210 as a set of streams to complement a conventional broadcast from the optional local receiver. Furthermore, the present invention is well-suited to using many different configurations of dedicated tuner-storage devices. For example, one or more dedicated media storage devices may be committed to a single dedicated tuner, thus allowing concurrent recording and viewing. Alternative embodiments are provided in subsequent figures.

The present invention also shows one open slot 312 for an additional dedicated tuner-storage pair. However, the present invention is well-suited to providing interactive media device 210 with any number of open slots and any number of installed dedicated tuner-storage pairs.

Additionally, dedicated service module 210 has a modular interface to media storage adapter 306 and tuner adapter 308 in the present embodiment. That is, the present embodiment of FIG. 3A is a form-factor media tower into which a consumer can plug or unplug dedicated service cartridge units containing the dedicated tuners and media storage devices.

Referring now to FIG. 3B, multiple designs of a dedicated service module are shown, in accordance with one embodiment of the present invention. Configurations A-C show alternative configurations for a modular embodiment of the dedicated service module, e.g. where the dedicated tuner-disk pairs are removable cartridges. Configuration A shows a traditional stand-alone dedicated service module device. Configuration B shows an integrated dedicated service module that is built in to a display device. Lastly, configuration C shows a stacked stand-alone dedicated service module device. The dedicated tuner-storage pairs can be plugged into a back-plane of any device appropriate for consumer use. The present invention is well-suited to using any other stacking and coupling configuration for a modular dedicated service module. It is appreciated that the integrated service module devices shown in FIG. 3B are exemplary. The present invention is well-suited to a wide range of designs and configurations for the dedicated service module and the cartridge embodiment of the dedicated tuner-disk pair.

Referring now to FIG. 4, a block diagram of another configuration of a dedicated service module is shown, in accordance with one embodiment of the present invention. Dedicated service module 310a, also referred to as a dedicated service cartridge, includes a media storage device 402 and a tuner 404. In the present embodiment, both the media storage device 402 and the tuner 404 to which it is coupled are dedicated to a specific content provider. For example, tuner 404 may be preset to receive a broadcast frequency corresponding to a national news broadcaster. In another embodiment, dedicated service module 310a can be a generic cartridge that is programmed with tuning instructions suitable to tune in the appropriate broadcast signal, in response to a subscription or to some other business model.

Tuner 404 of FIG. 4 is coupled to adapter 406 via data line 408 to receive a source signal, e.g. a broadcast spectrum. Media storage device 402 and tuner 404 are coupled via control line 410 to adapter 406 to receive instructions for the tuner and/or media storage device in accordance with on-site media service software and commands, e.g. via processor 204 and memory 206 of FIG. 2. Media storage device 402 is also coupled to adapter 406 via line 416 to provide media data from the media storage device to a media system, such as that shown in FIG. 2. Line 414 provides the dedicated media signal, tuned by tuner 404, to dedicated media storage 402. In another embodiment, data and control can be multiplexed on a single line. Adapter 406 allows dedicated service module 310a to interface with an interactive media system, such as the embodiment shown in FIG. 3A. As mentioned in FIG. 3A, another embodiment of a dedicated service module allows for dedicated Internet access, and thus eliminates the dedicated tuner but retains the dedicated media storage device.

In one embodiment, dedicated service module 310a of FIG. 4 is a modular unit that a consumer can purchase and simply insert into an interactive media system. Media storage device 402 is shown as a single device in FIG. 4. However, the present invention is well-suited to using many different configurations and embodiments. In another embodiment, multiple independent read/write access mechanisms can be adapted to a single recording disk for simultaneous read/write aspects. In the present embodiment, media storage device 402 is a hard drive unit, similar to those used in PCs. However, the present invention is well-suited to using any media recording device, as is appropriate for a given application. Additionally, the tuners and disks of the dedicated service module are capable of recording and delivering a fixed number of streams, e.g. for input and output, as appropriate for the service.

While FIG. 4 provides dedicated tuner-storage device 310a as a removable modular embodiment, it can also be configured as a fixed internal device for incorporation into a display device, such as a digital television. Additionally, tuner 404 can be implemented as a digital or an analog device. While FIG. 4 shows a single media storage device allocated to a single dedicated tuner, the present invention is well-suited to different configurations. For example, in lieu of dedicating an entire media storage device to a single dedicated tuner, one embodiment of the present invention dedicates one or more partitions of a common media storage device to a single dedicated tuner. In this manner, the single common storage device can be shared among multiple tuners while still satisfying the goal of guaranteed storage capacity for a broadcast signal.

Referring now to FIG. 5, a functional block diagram of an interactive media system including content provider media system and on-site media system is shown, in accordance with one embodiment of the present invention. Interactive media system 500 includes a content provider media system 520, also referred to as content provider, and includes an on-site media system 530.

Content provider media system 520 includes a media content database 504 that provides media content data, as indicated by the arrows, to an editing block 506 and to an encoder engine block 512. Any format of data can be stored in the media content database 504. For example, in one embodiment, the media content data stored in media content database 504 is compliant with the Moving Picture Experts Group-2 (MPEG-2) standard. Media content database 504 also communicates, as shown by arrow, with on-site media service database 502, which in turn provides data to editing block 506. On-site media service database 502 includes metadata, content options, service data and service options, function data and functional options, and interactive data and interactive options, in one embodiment. However, the present invention is well-suited to storing any other type of data that would enhance the on-site media service. These data may be commands, software code, descriptive structures, or other information useful to an on-site media system. Additionally, the granularity of the on-site media service data can range from segment-based to clip based, or shorter time-segments. Besides the data described, the present invention is well-suited to tying any other on-site media service data to the content data in order to provide an on-site media service that provides value to both content provider and viewer.
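
The kinds of data the on-site media service database holds alongside each piece of content might look roughly like the record below. Every field name here is an assumption chosen to mirror the categories listed above, sketched in Python for concreteness.

    service_record = {
        "content_id": "news-clip-07",             # ties service data to the content data
        "metadata": {"title": "Airplane story", "topic": "transportation"},
        "content_options": ["full_story", "teaser_only"],
        "service_options": {"bookmarkable": True},
        "function_options": ["jump_to_story", "skip_story"],
        "interactive_options": ["segment_user_interface"],
        "granularity": "clip",                    # segment-based down to clip-based
    }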

Editing block 506 can be thought of as the segment director's editing service which takes the raw production data and formats it into a television segment. The communication link between on-site media service database 502 and media content database 504 ties the on-site media service information to the core broadcast segment content, e.g. a core audiovisual news segment. Editing block 506 passes reference information, relating to the media content desired to be transmitted, to cutlist block 510. The service information corresponding to the desired segment content to be transmitted is sent in parallel from editing block 506 to the on-site media service data block 508. The output of blocks 508 and 510 is provided in parallel with the actual content data, referenced in cutlist block 510, from media content database 504, to an encoder block 512 which subsequently provides a media signal to a user, e.g. on-site media system 530. While the present embodiment performs some editing of raw production media data, it still provides a sufficient amount of content data to a local media system to allow the viewer some options, if desired, in the selection of the content.
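
The parallel hand-off described here, references to the desired content in a cutlist alongside the matching service data, might be sketched as follows. The shapes, the fetch_content callable, and the reuse of the service_record sketch above are all assumptions rather than the patent's data formats.

    cutlist = [
        {"content_id": "news-clip-07", "in_s": 0.0, "out_s": 95.0},
        {"content_id": "news-clip-08", "in_s": 0.0, "out_s": 140.0},
    ]
    service_data = [service_record]    # kept in step with the cutlist entries

    def encode_for_broadcast(cutlist, service_data, fetch_content):
        """Sketch of encoder block 512: pull the referenced content and bundle it
        with the service data into one media signal (represented here as a dict)."""
        return {
            "content": [fetch_content(entry) for entry in cutlist],
            "service_data": service_data,
        }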

In one embodiment, encoder block 512 is a transmitter that provides a terrestrial broadcast of media signal 522. However, the present invention is well-suited to any means of transmitting the media signal, such as cable or satellite. The present invention is also well-suited to a wide variety of methods for encoding data for transmission to an on-site media system.

The present embodiment of content provider interactive media system shown in FIG. 5 can be implemented with hardware that includes a processor coupled to a memory for storing instructions and commands and method steps. The hardware implementation would also include a media storage device such as one or more hard drives coupled to the processor, a user input device and a transmitter, all coupled to the processor.

The other component of interactive media system 500 is on-site media system 530, which can be grouped into different sections for clarity. A first functional section 552 performs data reception in on-site media system 530. A second functional section 554 performs data recording, while a third functional section 556 performs data presentation. In data reception section 552, broadcast signal 522 is first received at a decoder functional block 532 which transmits, as shown by arrows, the decoded signal to content manager block 536. An optional information source, such as Internet data block 534, can provide additional data that can be integrated in the functional stages of on-site media system 530. Thus, for example, Internet data block 534 can automatically cache specific Web content prior to viewer presentation in order to give the viewer a sense of instant access during the presentation. Additionally, a back channel can be enabled either via this Internet block or through other mechanisms, such as a cable modem for cable-based broadcast.

Decoder 532 can be a dedicated tuner, such as the dedicated tuner 404 shown in FIG. 4, or the dedicated tuner portion, e.g. tuner 201a of FIG. 3A. Content manager block 536 provides a filtering function on the decoded media signal. That is, content manager block 536 segregates content from on-site media service data and sends them to respective storage devices, e.g. media content hard drive 538 for content data, and on-site media service drive 540 for service data. These separate drives are figurative in one embodiment as both signals can be tied together by writing them to a single disk. Content manager block 536 can also implement a first-level content filter that, according to subscription software, user profile, or viewer-selected options, decides whether to record the media signal, e.g. to media content hard drive 538, or to ignore the signal and not record it. The content manager can be implemented using instructions stored in memory 206 and executed by processor 204 of on-site media hardware system 200, as shown in FIG. 2, in one embodiment.
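
A sketch of the two jobs described for the content manager: splitting the decoded signal into content versus on-site media service data, and a first-level filter that decides whether to record the content at all. The item fields, profile shape, and return labels are assumptions chosen for illustration.

    def route_decoded_item(item, profile, subscriptions):
        """Return "service_drive", "content_drive", or "ignore" for one decoded item."""
        if item.get("kind") == "service_data":
            return "service_drive"                 # on-site media service data
        # First-level content filter: subscription, user profile, or viewer-selected
        # options decide whether the media signal is recorded or ignored.
        if item.get("channel") in subscriptions:
            return "content_drive"
        if item.get("topic") in profile.get("interests", ()):
            return "content_drive"
        return "ignore"

Under these assumptions, an item on a subscribed channel or matching a profile interest is routed to the content drive, and anything else is simply not recorded.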

The next stage of on-site media system 530 is the data presentation formatting stage 556. In this stage, on-site media service information is received from on-site media service drive 540 at showflow engine block 544. Showflow engine block 544 formats and implements on-site media service data for subsequent integration with content data. Then showflow engine block 544 provides the processed data to rendering engine 542. Similarly, content data is received from dedicated media content hard drive 538 at rendering engine 542. Rendering engine 542 performs the formatting and integration of the desired images to be viewed on the display device, in one embodiment. A wide variety of media elements, e.g. video, audio, text, etc., may be combined in many different formats to provide a desired composite presentation for viewing on display device 546. For example, electronic program guide (EPG) information may be more dynamically formatted, including clips from the actual segment. That is, the EPG can be enabled via the present invention to allow users to view previews of any segment for which a commercial has been broadcast instead of the typical text title of a segment in a two-dimensional grid. In another embodiment, a user segment interface that presents menus, media clips, or other data, may be overlaid onto content images for display device 546.

Rendering engine 542 transfers presentation data to display device 546 for the final stage of presenting display 558. User input is communicated back to rendering engine 542 via line 548. User input can be received via push-button selection on a set-top box or a television unit, or from another source, such as a remote control input.

While the present embodiment only shows a single decoder 532 and a single dedicated hard drive, e.g. disk set 538 and 540, dedicated for a single media signal, e.g. signal 522, the present invention can provide functional blocks for multiple units in parallel, in one embodiment. In another embodiment, memory and processor resources (e.g. memory 206 and processor 204 of FIG. 2) are utilized to accomplish engine functions (e.g. rendering engine 542, content manager function 536, and showflow engine 544, as well as other engines not shown). It is appreciated that the engine functions performed on memory and processor are accomplished in a serial manner if only a single processor is implemented. In another embodiment, multiple processors can be utilized to accomplish dedicated functions in on-site media system 530, in a parallel or serial fashion.

Referring now to FIG. 6A, a diagram of a fine-grain media stream 600 is shown, in accordance with one embodiment of the present invention. FIG. 6A illustrates segment data and duration as a physical block 601. Segment block 601 has a time span 606 over which content is presented. The present invention provides a very fine grain metadata tagging for segment content. For example, FIG. 6A shows metadata labeling at a clip level, e.g. metadata tag 603a for clip content 602a having a time span of 604. This is repeated for any quantity of clips within the segment. The present invention is well-suited to using any scale of metadata labeling, as appropriate for an application. For example, tagging clips with metadata would be appropriate for some news segments having many short clips in the segment. By using the fine-grain metadata tagging, the present invention provides the necessary data and infrastructure for an on-site media service to provide enhanced services and functions to a viewer. One such feature would be fine-grain navigation and compilation of media content related to a specific viewer interest or inquiry.
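
Clip-level tagging along the lines of FIG. 6A could be represented as nested metadata, with the segment as the outer block of time and one descriptive tag per clip. The field names below are illustrative only.

    segment = {
        "segment_id": "evening-news",
        "duration_s": 1800,                       # time span 606 of the whole segment
        "clips": [
            {"start_s": 0,   "duration_s": 95,  "tag": {"topic": "weather"}},
            {"start_s": 95,  "duration_s": 140, "tag": {"topic": "airplane story"}},
            # ...one tag per clip, enabling fine-grain navigation and compilation
        ],
    }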

Referring now to FIG. 6B, a television view generated using the interactive media system is shown, in accordance with one embodiment of the present invention. Television view 650 is shown on a conventional television 658. Segment user interface 654 is provided along with a presenter 656 image, both of which are overlaid onto a core media content 652, e.g., an airplane story clip. The present invention provides the appropriate audio and associated data corresponding to the video data. Notably, the content provider can exercise editorial control over when and what service, function, and content options are available to the viewer, e.g. through the segment user interface. This allows greater choice to a viewer while still satisfying a business model for the content provider.

Television view 650 illustrates how the content provider, e.g. broadcaster, can control some of the recording, management, formatting, and presentation of media to a user. Similarly, television view 650 illustrates how the viewer can interact with predetermined menu options to accomplish desired services and features, e.g. viewing the segment user interface for alternative clips, selecting a function from a menu in segment user interface 654, or adjusting the presenter format 656. The present invention is well-suited to using any combination of these, and other, presentation formats and contents to present an on-site media service to the viewer and/or user. Furthermore, each of the several on-site media services described can be implemented independent of each other, or in any combination. The same independence exists for the interactive feature of the on-site media service.

The method can be implemented in an environment with software-controlled access to streamed media, where descriptive metadata is used to relate teaser AV material to full-length versions of the corresponding content.

These and other embodiments of the present invention may be realized in accordance with these teachings and it should be evident that various modifications and changes may be made in these teachings without departing from the broader spirit and scope of the invention. The specification and drawings are, accordingly, to be regarded in an illustrative rather than restrictive sense and the invention measured only in terms of the claims.