Google New Method Access Using Images Patent Filed (June 6, 2012)<div dir="ltr" style="text-align: left;" trbidi="on">
<hr style="text-align: center;" />
<table style="text-align: center;"><tbody>
<tr><td style="text-align: left;" width="50%"><b>United States Patent</b></td><td style="text-align: right;" width="50%"><b>8,196,198</b></td></tr>
<tr><td style="text-align: left;" width="50%"><b>Eger</b></td><td style="text-align: right;" width="50%"><b>June 5, 2012</b></td></tr>
</tbody></table>
<hr style="text-align: center;" />
<div style="text-align: center;">
<span style="font-size: x-large; text-align: -webkit-auto;"><b>Access Using Images </b></span></div>
<br />
<br />
<br />
<center><b>Abstract</b></center><br />
<div style="text-align: -webkit-auto;">
A computer-implemented method may include presenting to a user multiple images, multiple identifiers and a challenge to select one of the identifiers for each of the images, receiving the selected identifiers from the user, and providing access to a computing service based on a comparison of the selected identifiers to an answer to the challenge.</div>
<hr style="text-align: -webkit-auto;" />
<table><tbody>
<tr><td align="LEFT" valign="TOP" width="10%">Inventors:</td><td align="LEFT" width="90%"><b>Eger; David Thomas</b> (Burlingame, CA)</td></tr>
<tr><td align="LEFT" valign="TOP" width="10%">Assignee:</td><td align="LEFT" width="90%"><b><i>Google</i> Inc.</b> (Mountain View, CA)</td></tr>
<tr><td align="LEFT" nowrap="" valign="TOP" width="10%">Appl. No.:</td><td align="LEFT" width="90%"><b>12/345,265</b></td></tr>
<tr><td align="LEFT" valign="TOP" width="10%">Filed:</td><td align="LEFT" width="90%"><b>December 29, 2008</b></td></tr>
</tbody></table>
<hr style="text-align: -webkit-auto;" />
<div style="text-align: -webkit-auto;">
</div>
<table><tbody>
<tr><td align="LEFT" valign="TOP" width="40%"><b>Current U.S. Class:</b></td><td align="RIGHT" valign="TOP" width="80%"><b>726/21</b>; 726/2; 726/7</td></tr>
<tr><td align="LEFT" valign="TOP" width="40%"><b>Current International Class:</b></td><td align="RIGHT" valign="TOP" width="80%">G06F 7/04 (20060101)</td></tr>
<tr><td align="LEFT" valign="TOP" width="40%"><b>Field of Search:</b></td><td align="RIGHT" valign="TOP" width="80%">726/2,4,17,21,27 713/155-159,168-186 380/247-250 705/44</td></tr>
</tbody></table>
<br />
<hr style="text-align: -webkit-auto;" />
<br />
<br />
<center><b>References Cited <a href="http://patft.uspto.gov/netacgi/nph-Parser?Sect1=PTO2&Sect2=HITOFF&p=1&u=%2Fnetahtml%2Fsearch-adv.htm&r=0&f=S&l=50&d=PALL&Query=ref/8196198">[Referenced By]</a></b></center><br />
<hr style="text-align: -webkit-auto;" />
<br />
<br />
<center><b>U.S. Patent Documents</b></center><br />
<table><tbody>
<tr><td width="33%"></td><td width="33%"></td><td width="34%"></td></tr>
<tr><td align="left"><a href="http://patft.uspto.gov/netacgi/nph-Parser?Sect2=PTO1&Sect2=HITOFF&p=1&u=%2Fnetahtml%2FPTO%2Fsearch-bool.html&r=1&f=G&l=50&d=PALL&RefSrch=yes&Query=PN%2F6128397">6128397</a></td><td align="left">October 2000</td><td align="left">Baluja et al.</td></tr>
<tr><td align="left"><a href="http://patft.uspto.gov/netacgi/nph-Parser?Sect2=PTO1&Sect2=HITOFF&p=1&u=%2Fnetahtml%2FPTO%2Fsearch-bool.html&r=1&f=G&l=50&d=PALL&RefSrch=yes&Query=PN%2F6195698">6195698</a></td><td align="left">February 2001</td><td align="left">Lillibridge et al.</td></tr>
<tr><td align="left"><a href="http://patft.uspto.gov/netacgi/nph-Parser?Sect2=PTO1&Sect2=HITOFF&p=1&u=%2Fnetahtml%2FPTO%2Fsearch-bool.html&r=1&f=G&l=50&d=PALL&RefSrch=yes&Query=PN%2F6295387">6295387</a></td><td align="left">September 2001</td><td align="left">Burch</td></tr>
<tr><td align="left"><a href="http://patft.uspto.gov/netacgi/nph-Parser?Sect2=PTO1&Sect2=HITOFF&p=1&u=%2Fnetahtml%2FPTO%2Fsearch-bool.html&r=1&f=G&l=50&d=PALL&RefSrch=yes&Query=PN%2F6956966">6956966</a></td><td align="left">October 2005</td><td align="left">Steinberg</td></tr>
<tr><td align="left"><a href="http://patft.uspto.gov/netacgi/nph-Parser?Sect2=PTO1&Sect2=HITOFF&p=1&u=%2Fnetahtml%2FPTO%2Fsearch-bool.html&r=1&f=G&l=50&d=PALL&RefSrch=yes&Query=PN%2F7149899">7149899</a></td><td align="left">December 2006</td><td align="left">Pinkas</td></tr>
<tr><td align="left"><a href="http://patft.uspto.gov/netacgi/nph-Parser?Sect2=PTO1&Sect2=HITOFF&p=1&u=%2Fnetahtml%2FPTO%2Fsearch-bool.html&r=1&f=G&l=50&d=PALL&RefSrch=yes&Query=PN%2F7266693">7266693</a></td><td align="left">September 2007</td><td align="left">Potter</td></tr>
<tr><td align="left"><a href="http://patft.uspto.gov/netacgi/nph-Parser?Sect2=PTO1&Sect2=HITOFF&p=1&u=%2Fnetahtml%2FPTO%2Fsearch-bool.html&r=1&f=G&l=50&d=PALL&RefSrch=yes&Query=PN%2F7653944">7653944</a></td><td align="left">January 2010</td><td align="left">Chellapilla</td></tr>
<tr><td align="left"><a href="http://patft.uspto.gov/netacgi/nph-Parser?Sect2=PTO1&Sect2=HITOFF&p=1&u=%2Fnetahtml%2FPTO%2Fsearch-bool.html&r=1&f=G&l=50&d=PALL&RefSrch=yes&Query=PN%2F7656402">7656402</a></td><td align="left">February 2010</td><td align="left">Abraham et al.</td></tr>
<tr><td align="left"><a href="http://patft.uspto.gov/netacgi/nph-Parser?Sect2=PTO1&Sect2=HITOFF&p=1&u=%2Fnetahtml%2FPTO%2Fsearch-bool.html&r=1&f=G&l=50&d=PALL&RefSrch=yes&Query=PN%2F7841940">7841940</a></td><td align="left">November 2010</td><td align="left">Bronstein</td></tr>
<tr><td align="left"><a href="http://patft.uspto.gov/netacgi/nph-Parser?Sect2=PTO1&Sect2=HITOFF&p=1&u=%2Fnetahtml%2FPTO%2Fsearch-bool.html&r=1&f=G&l=50&d=PALL&RefSrch=yes&Query=PN%2F7891005">7891005</a></td><td align="left">February 2011</td><td align="left">Baluja et al.</td></tr>
<tr><td align="left"><a href="http://patft.uspto.gov/netacgi/nph-Parser?Sect2=PTO1&Sect2=HITOFF&p=1&u=%2Fnetahtml%2FPTO%2Fsearch-bool.html&r=1&f=G&l=50&d=PALL&RefSrch=yes&Query=PN%2F7908223">7908223</a></td><td align="left">March 2011</td><td align="left">Klein et al.</td></tr>
<tr><td align="left"><a href="http://patft.uspto.gov/netacgi/nph-Parser?Sect2=PTO1&Sect2=HITOFF&p=1&u=%2Fnetahtml%2FPTO%2Fsearch-bool.html&r=1&f=G&l=50&d=PALL&RefSrch=yes&Query=PN%2F7921454">7921454</a></td><td align="left">April 2011</td><td align="left">Cerruti</td></tr>
<tr><td align="left"><a href="http://patft.uspto.gov/netacgi/nph-Parser?Sect2=PTO1&Sect2=HITOFF&p=1&u=%2Fnetahtml%2FPTO%2Fsearch-bool.html&r=1&f=G&l=50&d=PALL&RefSrch=yes&Query=PN%2F7929805">7929805</a></td><td align="left">April 2011</td><td align="left">Wang et al.</td></tr>
<tr><td align="left"><a href="http://patft.uspto.gov/netacgi/nph-Parser?Sect2=PTO1&Sect2=HITOFF&p=1&u=%2Fnetahtml%2FPTO%2Fsearch-bool.html&r=1&f=G&l=50&d=PALL&RefSrch=yes&Query=PN%2F8019127">8019127</a></td><td align="left">September 2011</td><td align="left">Misra</td></tr>
<tr><td align="left"><a href="http://patft.uspto.gov/netacgi/nph-Parser?Sect2=PTO1&Sect2=HITOFF&p=1&u=%2Fnetahtml%2FPTO%2Fsearch-bool.html&r=1&f=G&l=50&d=PALL&RefSrch=yes&Query=PN%2F8073912">8073912</a></td><td align="left">December 2011</td><td align="left">Kaplan</td></tr>
<tr><td align="left"><a href="http://patft.uspto.gov/netacgi/nph-Parser?Sect2=PTO1&Sect2=HITOFF&p=1&u=%2Fnetahtml%2FPTO%2Fsearch-bool.html&r=1&f=G&l=50&d=PALL&RefSrch=yes&Query=PN%2F8090219">8090219</a></td><td align="left">January 2012</td><td align="left">Gossweiler et al.</td></tr>
<tr><td align="left"><a href="http://patft.uspto.gov/netacgi/nph-Parser?Sect2=PTO1&Sect2=HITOFF&p=1&u=%2Fnetahtml%2FPTO%2Fsearch-bool.html&r=1&f=G&l=50&d=PALL&RefSrch=yes&Query=PN%2F8103960">8103960</a></td><td align="left">January 2012</td><td align="left">Hua et al.</td></tr>
<tr><td align="left"><a href="http://patft.uspto.gov/netacgi/nph-Parser?Sect2=PTO1&Sect2=HITOFF&p=1&u=%2Fnetahtml%2FPTO%2Fsearch-bool.html&r=1&f=G&l=50&d=PALL&RefSrch=yes&Query=PN%2F8136167">8136167</a></td><td align="left">March 2012</td><td align="left">Gossweiler et al.</td></tr>
<tr><td align="left"><a href="http://appft1.uspto.gov/netacgi/nph-Parser?TERM1=20020141639&Sect1=PTO1&Sect2=HITOFF&d=PG01&p=1&u=%2Fnetahtml%2FPTO%2Fsrchnum.html&r=0&f=S&l=50" target="_blank">2002/0141639</a></td><td align="left">October 2002</td><td align="left">Steinberg</td></tr>
<tr><td align="left"><a href="http://appft1.uspto.gov/netacgi/nph-Parser?TERM1=20040073813&Sect1=PTO1&Sect2=HITOFF&d=PG01&p=1&u=%2Fnetahtml%2FPTO%2Fsrchnum.html&r=0&f=S&l=50" target="_blank">2004/0073813</a></td><td align="left">April 2004</td><td align="left">Pinkas et al.</td></tr>
<tr><td align="left"><a href="http://appft1.uspto.gov/netacgi/nph-Parser?TERM1=20040199597&Sect1=PTO1&Sect2=HITOFF&d=PG01&p=1&u=%2Fnetahtml%2FPTO%2Fsrchnum.html&r=0&f=S&l=50" target="_blank">2004/0199597</a></td><td align="left">October 2004</td><td align="left">Libbey et al.</td></tr>
<tr><td align="left"><a href="http://appft1.uspto.gov/netacgi/nph-Parser?TERM1=20050014118&Sect1=PTO1&Sect2=HITOFF&d=PG01&p=1&u=%2Fnetahtml%2FPTO%2Fsrchnum.html&r=0&f=S&l=50" target="_blank">2005/0014118</a></td><td align="left">January 2005</td><td align="left">von Ahn Arellano</td></tr>
<tr><td align="left"><a href="http://appft1.uspto.gov/netacgi/nph-Parser?TERM1=20050065802&Sect1=PTO1&Sect2=HITOFF&d=PG01&p=1&u=%2Fnetahtml%2FPTO%2Fsrchnum.html&r=0&f=S&l=50" target="_blank">2005/0065802</a></td><td align="left">March 2005</td><td align="left">Rui et al.</td></tr>
<tr><td align="left"><a href="http://appft1.uspto.gov/netacgi/nph-Parser?TERM1=20050229251&Sect1=PTO1&Sect2=HITOFF&d=PG01&p=1&u=%2Fnetahtml%2FPTO%2Fsrchnum.html&r=0&f=S&l=50" target="_blank">2005/0229251</a></td><td align="left">October 2005</td><td align="left">Chellapilla et al.</td></tr>
<tr><td align="left"><a href="http://appft1.uspto.gov/netacgi/nph-Parser?TERM1=20060167874&Sect1=PTO1&Sect2=HITOFF&d=PG01&p=1&u=%2Fnetahtml%2FPTO%2Fsrchnum.html&r=0&f=S&l=50" target="_blank">2006/0167874</a></td><td align="left">July 2006</td><td align="left">von Ahn Arellano et al.</td></tr>
<tr><td align="left"><a href="http://appft1.uspto.gov/netacgi/nph-Parser?TERM1=20070130618&Sect1=PTO1&Sect2=HITOFF&d=PG01&p=1&u=%2Fnetahtml%2FPTO%2Fsrchnum.html&r=0&f=S&l=50" target="_blank">2007/0130618</a></td><td align="left">June 2007</td><td align="left">Chen</td></tr>
<tr><td align="left"><a href="http://appft1.uspto.gov/netacgi/nph-Parser?TERM1=20070201745&Sect1=PTO1&Sect2=HITOFF&d=PG01&p=1&u=%2Fnetahtml%2FPTO%2Fsrchnum.html&r=0&f=S&l=50" target="_blank">2007/0201745</a></td><td align="left">August 2007</td><td align="left">Wang et al.</td></tr>
<tr><td align="left"><a href="http://appft1.uspto.gov/netacgi/nph-Parser?TERM1=20080050018&Sect1=PTO1&Sect2=HITOFF&d=PG01&p=1&u=%2Fnetahtml%2FPTO%2Fsrchnum.html&r=0&f=S&l=50" target="_blank">2008/0050018</a></td><td align="left">February 2008</td><td align="left">Koziol</td></tr>
<tr><td align="left"><a href="http://appft1.uspto.gov/netacgi/nph-Parser?TERM1=20080216163&Sect1=PTO1&Sect2=HITOFF&d=PG01&p=1&u=%2Fnetahtml%2FPTO%2Fsrchnum.html&r=0&f=S&l=50" target="_blank">2008/0216163</a></td><td align="left">September 2008</td><td align="left">Pratte et al.</td></tr>
<tr><td align="left"><a href="http://appft1.uspto.gov/netacgi/nph-Parser?TERM1=20080244700&Sect1=PTO1&Sect2=HITOFF&d=PG01&p=1&u=%2Fnetahtml%2FPTO%2Fsrchnum.html&r=0&f=S&l=50" target="_blank">2008/0244700</a></td><td align="left">October 2008</td><td align="left">Osborn et al.</td></tr>
<tr><td align="left"><a href="http://appft1.uspto.gov/netacgi/nph-Parser?TERM1=20090094687&Sect1=PTO1&Sect2=HITOFF&d=PG01&p=1&u=%2Fnetahtml%2FPTO%2Fsrchnum.html&r=0&f=S&l=50" target="_blank">2009/0094687</a></td><td align="left">April 2009</td><td align="left">Jastrebski</td></tr>
<tr><td align="left"><a href="http://appft1.uspto.gov/netacgi/nph-Parser?TERM1=20090113294&Sect1=PTO1&Sect2=HITOFF&d=PG01&p=1&u=%2Fnetahtml%2FPTO%2Fsrchnum.html&r=0&f=S&l=50" target="_blank">2009/0113294</a></td><td align="left">April 2009</td><td align="left">Sanghavi et al.</td></tr>
<tr><td align="left"><a href="http://appft1.uspto.gov/netacgi/nph-Parser?TERM1=20090138468&Sect1=PTO1&Sect2=HITOFF&d=PG01&p=1&u=%2Fnetahtml%2FPTO%2Fsrchnum.html&r=0&f=S&l=50" target="_blank">2009/0138468</a></td><td align="left">May 2009</td><td align="left">Kurihara</td></tr>
<tr><td align="left"><a href="http://appft1.uspto.gov/netacgi/nph-Parser?TERM1=20090138723&Sect1=PTO1&Sect2=HITOFF&d=PG01&p=1&u=%2Fnetahtml%2FPTO%2Fsrchnum.html&r=0&f=S&l=50" target="_blank">2009/0138723</a></td><td align="left">May 2009</td><td align="left">Nyang</td></tr>
<tr><td align="left"><a href="http://appft1.uspto.gov/netacgi/nph-Parser?TERM1=20090150983&Sect1=PTO1&Sect2=HITOFF&d=PG01&p=1&u=%2Fnetahtml%2FPTO%2Fsrchnum.html&r=0&f=S&l=50" target="_blank">2009/0150983</a></td><td align="left">June 2009</td><td align="left">Saxena et al.</td></tr>
<tr><td align="left"><a href="http://appft1.uspto.gov/netacgi/nph-Parser?TERM1=20090235178&Sect1=PTO1&Sect2=HITOFF&d=PG01&p=1&u=%2Fnetahtml%2FPTO%2Fsrchnum.html&r=0&f=S&l=50" target="_blank">2009/0235178</a></td><td align="left">September 2009</td><td align="left">Cipriani et al.</td></tr>
<tr><td align="left"><a href="http://appft1.uspto.gov/netacgi/nph-Parser?TERM1=20090249476&Sect1=PTO1&Sect2=HITOFF&d=PG01&p=1&u=%2Fnetahtml%2FPTO%2Fsrchnum.html&r=0&f=S&l=50" target="_blank">2009/0249476</a></td><td align="left">October 2009</td><td align="left">Seacat et al.</td></tr>
<tr><td align="left"><a href="http://appft1.uspto.gov/netacgi/nph-Parser?TERM1=20090249477&Sect1=PTO1&Sect2=HITOFF&d=PG01&p=1&u=%2Fnetahtml%2FPTO%2Fsrchnum.html&r=0&f=S&l=50" target="_blank">2009/0249477</a></td><td align="left">October 2009</td><td align="left">Punera</td></tr>
<tr><td align="left"><a href="http://appft1.uspto.gov/netacgi/nph-Parser?TERM1=20090319274&Sect1=PTO1&Sect2=HITOFF&d=PG01&p=1&u=%2Fnetahtml%2FPTO%2Fsrchnum.html&r=0&f=S&l=50" target="_blank">2009/0319274</a></td><td align="left">December 2009</td><td align="left">Gross</td></tr>
<tr><td align="left"><a href="http://appft1.uspto.gov/netacgi/nph-Parser?TERM1=20090325696&Sect1=PTO1&Sect2=HITOFF&d=PG01&p=1&u=%2Fnetahtml%2FPTO%2Fsrchnum.html&r=0&f=S&l=50" target="_blank">2009/0325696</a></td><td align="left">December 2009</td><td align="left">Gross</td></tr>
<tr><td align="left"><a href="http://appft1.uspto.gov/netacgi/nph-Parser?TERM1=20090328150&Sect1=PTO1&Sect2=HITOFF&d=PG01&p=1&u=%2Fnetahtml%2FPTO%2Fsrchnum.html&r=0&f=S&l=50" target="_blank">2009/0328150</a></td><td align="left">December 2009</td><td align="left">Gross</td></tr>
<tr><td align="left"><a href="http://appft1.uspto.gov/netacgi/nph-Parser?TERM1=20100077210&Sect1=PTO1&Sect2=HITOFF&d=PG01&p=1&u=%2Fnetahtml%2FPTO%2Fsrchnum.html&r=0&f=S&l=50" target="_blank">2010/0077210</a></td><td align="left">March 2010</td><td align="left">Broder et al.</td></tr>
<tr><td align="left"><a href="http://appft1.uspto.gov/netacgi/nph-Parser?TERM1=20100100725&Sect1=PTO1&Sect2=HITOFF&d=PG01&p=1&u=%2Fnetahtml%2FPTO%2Fsrchnum.html&r=0&f=S&l=50" target="_blank">2010/0100725</a></td><td align="left">April 2010</td><td align="left">Ozzie et al.</td></tr>
</tbody></table>
<br />
<br />
<center><b>Foreign Patent Documents</b></center><br />
<table><tbody>
<tr><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td align="left"></td><td align="left">2008/091675</td><td></td><td align="left">Jul., 2008</td><td></td><td align="left">WO</td></tr>
</tbody></table>
<br />
<br />
<br />
<center><b>Other References</b></center><br />
<table><tbody>
<tr><td align="left"><br />Chellapilla, K., et al. "Computers Beat Humans at Single Character Recognition in Reading Based Human Interaction Proofs (HIPs)," in Proceedings of the 2nd Conference on Email and Anti-Spam (CEAS), 2005.<br />Rowley, H., et al. "Rotation Invariant Neural Network-Based Face Detection," CMU-CS-97-201, Dec. 1997.<br />Fu, H., et al. "Upright Orientation of Man-Made Objects," SIGGRAPH 2008, 35th International Conference and Exhibition on Computer Graphics and Interactive Techniques, Aug. 2008.<br />Lopresti, D., "Leveraging the CAPTCHA Problem," 2nd Int'l Workshop on Human Interactive Proofs, Bethlehem, PA, May 2005.<br />Rowley, H., et al. "Neural Network-Based Face Detection," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 20, No. 1, Jan. 1998.<br />Mori, G., et al. "Recognizing Objects in Adversarial Clutter: Breaking a Visual CAPTCHA," Proceedings of Computer Vision and Pattern Recognition, 2003.<br />Rui, Y., et al. "Characters or Faces: A User Study on Ease of Use for HIPs," Lecture Notes in Computer Science, vol. 3517, pp. 53-65, Springer Berlin, 2005.<br />Vailaya, A., et al. "Automatic Image Orientation Detection," IEEE Transactions on Image Processing, vol. 11, No. 7, pp. 746-755, Jul. 2002.<br />Baluja, S., et al. "Large Scale Performance Measurement of Content-Based Automated Image-Orientation Detection," IEEE Conference on Image Processing, vol. 2, pp. 514-517, Sep. 11-14, 2005.<br />Viola, P., et al. "Rapid Object Detection Using a Boosted Cascade of Simple Features," Proceedings of Computer Vision and Pattern Recognition, pp. 511-518, 2001.<br />Von Ahn, L., et al. "Telling Humans and Computers Apart (Automatically) or How Lazy Cryptographers Do AI," Communications of the ACM, vol. 47, No. 2, Feb. 2004.<br />Von Ahn, L., et al. "CAPTCHA: Using Hard AI Problems for Security," Advances in Cryptology--EUROCRYPT 2003, Springer Berlin, 2003.<br />Von Ahn, L., et al. "Labeling Images With a Computer Game," Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, ACM, pp. 319-326, Vienna, Austria, 2004.<br />Von Ahn, L., et al. "Improving Accessibility of the Web With a Computer Game," Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, ACM, pp. 79-82, Montreal, Quebec, Canada, 2006.<br />Von Ahn, L. "Games With a Purpose," IEEE Computer, pp. 96-98, Jun. 2006.<br />Wu, V., et al. "Textfinder: An Automatic System to Detect and Recognize Text in Images," Computer Science Department, Univ. of Massachusetts, Nov. 18, 1997.<br />Wu, V., et al. "Finding Text in Images," Proceedings of the 2nd ACM Int'l Conf. on Digital Libraries, 1997.<br />Zhang, L., et al. "Boosting Image Orientation Detection With Indoor vs. Outdoor Classification," IEEE Workshop on Application of Computer Vision, pp. 95-99, Dec. 2002.<br />Elson, J., et al. "Asirra: A CAPTCHA that Exploits Interest-Aligned Manual Image Categorization," CCS '07, 9 pages, Oct. 2007.<br />Praun, E., et al. "Lapped Textures," ACM SIGGRAPH 2000, 6 pages, 2000.<br />Adamchak, et al., "A Guide to Monitoring and Evaluating Adolescent Reproductive Health Programs," Pathfinder International, Focus on Young Adults, 2000, pp. 265-274.<br />Siegle, D., "Sample Size Calculator," Neag School of Education--University of Connecticut, retrieved on Sep. 18, 2008, from http://www.gifted.uconn.edu/siegle/research/Samples/samplecalculator.htm, 2 pages.<br />"Sampling Information," Minnesota Center for Survey Research--University of Minnesota, 2007, 4 pages.<br />U.S. Appl. No. 12/256,827, filed Oct. 23, 2008.<br />U.S. Appl. No. 12/254,312, filed Oct. 20, 2008.<br />U.S. Appl. No. 12/486,714, filed Jun. 17, 2009.<br />U.S. Appl. No. 12/345,265, filed Dec. 29, 2008.<br />U.S. Appl. No. 12/254,325, filed Oct. 20, 2008.<br />Chew, et al., "Collaborative Filtering CAPTCHAs," HIP 2005, LNCS 3517, May 20, 2005, pp. 66-81.<br />Extended EP Search Report for EP Application No. 08713263.5, mailed Feb. 4, 2011, 9 pages.<br />Lopresti, "Leveraging the CAPTCHA Problem," HIP 2005, LNCS 3517, May 20, 2005, pp. 97-110.<br />Shirali-Shahrea, "Collage CAPTCHA," IEEE 2007, 4 pages.<br />Shirali-Shahrea, "Online Collage CAPTCHA," WIAMIS '07: Eighth International Workshop on Image Analysis for Multimedia Interactive Services, 2007, 4 pages.<br />Xu, et al., "Mandatory Human Participation: A New Authentication Scheme for Building Secure Systems," Proceedings of the 12th International Conference on Computer Communications and Networks, Oct. 20, 2003, pp. 547-552.<br />"Figure," The American Heritage Dictionary of the English Language, 2007, retrieved on Aug. 13, 2011 from http://www.credoreference.com/entry/hmdictenglang/figure, 4 pages.<br />First Office Action for Chinese Patent Application No. 200880002917.8 (with English translation), mailed May 12, 2011, 7 pages.<br />Non-Final Office Action for U.S. Appl. No. 12/606,465, mailed Aug. 19, 2011, 25 pages.<br />Non-Final Office Action for U.S. Appl. No. 12/254,325, mailed Sep. 1, 2011, 17 pages.<br />Restriction Requirement for U.S. Appl. No. 12/254,312, mailed Sep. 14, 2011, 5 pages.<br />Restriction Requirement Response for U.S. Appl. No. 12/254,312, filed Oct. 14, 2011, 1 page.<br />Notice of Allowance for U.S. Appl. No. 12/254,312, mailed Nov. 7, 2011, 19 pages.<br />Office Action for European Application No. 08713263.5, mailed Dec. 23, 2011, 4 pages.<br />Final Office Action for U.S. Appl. No. 12/254,325, mailed Feb. 10, 2012, 15 pages.<br />Non-Final Office Action for U.S. Appl. No. 12/486,714, mailed Mar. 2, 2012, 16 pages.</td></tr>
</tbody></table>
<br />
<i style="text-align: -webkit-auto;">Primary Examiner:</i><span style="background-color: white; text-align: -webkit-auto;"> Zand; Kambiz </span><br />
<i style="text-align: -webkit-auto;">Assistant Examiner:</i><span style="background-color: white; text-align: -webkit-auto;"> Mohammadi; Fahimeh </span><br />
<i style="text-align: -webkit-auto;">Attorney, Agent or Firm:</i><span style="background-color: white; text-align: -webkit-auto;"> Brake Hughes Bellermann LLP</span><br />
<hr />
<br />
<center><b><i>Claims</i></b></center><br />
<hr />
<br />
<br />
What is claimed is:<br />
<br />
1. A computer-implemented method, comprising: presenting to a user multiple images, multiple identifiers and a challenge to select one of the identifiers for each of the images, wherein: the images represent objects that are identifiable by a specific identifier, and more identifiers than images are presented in a single access attempt with the presented identifiers including the specific identifiers that identify the presented images and non-specific identifiers that do not identify the presented images; receiving the selected identifiers from the user from among the presented identifiers; and providing access to a computing service based on a comparison of the selected identifiers to an answer to the challenge when the selected identifiers match the specific identifiers to the presented images.<br />
<br />
2. The computer-implemented method as in claim 1 wherein the images are three dimensional models.<br />
<br />
3. The computer-implemented method as in claim 1 wherein the images are randomly textured, three dimensional models.<br />
<br />
4. The computer-implemented method as in claim 1 wherein the images are three dimensional models with each of the three dimensional models set against a separate, randomly generated background.<br />
<br />
5. The computer-implemented method as in claim 1 wherein the images are randomly rotated, three dimensional models.<br />
<br />
6. The computer-implemented method as in claim 1 wherein the images are randomly colored, three dimensional models.<br />
<br />
7. The computer-implemented method as in claim 1 wherein the images are randomly textured, three dimensional models with each of the three dimensional models set against a separate, randomly generated background.<br />
<br />
8. The computer-implemented method as in claim 1 wherein at least two times more of the identifiers are presented than the images.<br />
<br />
9. The computer-implemented method as in claim 1 wherein at least three times more of the identifiers are presented than the images.<br />
<br />
10. The computer-implemented method as in claim 1 wherein providing access to the computing service comprises unlocking a mobile computing device.<br />
<br />
11. The computer-implemented method as in claim 1 wherein providing access to the computing service comprises serving to the user a web page.<br />
<br />
12. A computer-readable storage device having recorded and stored thereon instructions that, when executed, perform the actions of: presenting to a user multiple images, multiple identifiers and a challenge to select one of the identifiers for each of the images, wherein: the images represent objects that are identifiable by a specific identifier, and more identifiers than images are presented in a single access attempt with the presented identifiers including the specific identifiers that identify the presented images and non-specific identifiers that do not identify the presented images; receiving the selected identifiers from the user from among the presented identifiers; and providing access to a computing service based on a comparison of the selected identifiers to an answer to the challenge when the selected identifiers match the specific identifiers to the presented images.<br />
<br />
13. The computer-readable storage device of claim 12 wherein the images are three dimensional models.<br />
<br />
14. The computer-readable storage device of claim 12 wherein the images are randomly textured, three dimensional models.<br />
<br />
15. The computer-readable storage device of claim 12 wherein the images are three dimensional models with each of the three dimensional models set against a separate, randomly generated background.<br />
<br />
16. The computer-readable storage device of claim 12 wherein the images are randomly rotated, three dimensional models.<br />
<br />
17. The computer-readable storage device of claim 12 wherein the images are randomly colored, three dimensional models.<br />
<br />
18. The computer-readable storage device of claim 12 wherein the images are randomly textured, three dimensional models with each of the three dimensional models set against a separate, randomly generated background.<br />
<br />
19. The computer-readable storage device of claim 12 wherein providing access to the computing service comprises unlocking a mobile computing device.<br />
<br />
20. The computer-readable storage device of claim 12 wherein providing access to the computing service comprises serving to the user a web page.<br />
<br />
21. A computer-implemented access control system, comprising: one or more servers that are arranged and configured to: present to a user multiple images, multiple identifiers and a challenge to select one of the identifiers for each of the images, wherein: the images represent objects that are identifiable by a specific identifier, and more identifiers than images are presented in a single access attempt with the presented identifiers including the specific identifiers that identify the presented images and non-specific identifiers that do not identify the presented images; receive the selected identifiers from the user from among the presented identifiers; and provide access to a computing service based on a comparison of the selected identifiers to an answer to the challenge when the selected identifiers match the specific identifiers to the presented images.<br />
<br />
22. The system of claim 21 wherein the images are randomly textured, three dimensional models with each of the three dimensional models set against a separate, randomly generated background.<br />
<br />
23. The system of claim 21 wherein the servers are arranged and configured to provide access to the computing service by serving to the user a web page.<br />
<br />
24. A computer-implemented method, comprising: presenting to a user multiple images, multiple identifiers and a challenge to select one of the identifiers for each of the images, wherein: the images represent objects that are identifiable by a specific identifier, and more identifiers than images are presented in a single access attempt with the presented identifiers including the specific identifiers that identify the presented images and non-specific identifiers that do not identify the presented images; receiving the selected identifiers from the user from among the presented identifiers; and providing access to an electronic device based on a comparison of the selected identifiers to an answer to the challenge when the selected identifiers match the specific identifiers to the presented images.<br />
<br />
25. The computer-implemented method as in claim 24 wherein the images are randomly textured, three dimensional models with each of the three dimensional models set against a separate, randomly generated background.<br />
<br />
26. The computer-implemented method as in claim 24 wherein providing access to the electronic device comprises unlocking a music device.<br />
<br />
27. The computer-implemented method as in claim 24 wherein providing access to the electronic device comprises unlocking a game device.<br />
<hr />
<br />
<center><b><i>Description</i></b></center><br />
<hr />
<br />
<br />
TECHNICAL FIELD<br />
<br />
This document relates to systems and techniques for providing access to computing resources based on user responses to images.<br />
<br />
BACKGROUND<br />
<br />
Computer security is becoming an ever more important feature of computing systems. As users take their computers with them in the form of laptops, palmtops, and smart phones, it becomes desirable to lock such mobile computers from access by third parties. Also, as more computing resources on servers are made available over the Internet, and thus theoretically available to anyone, it becomes more important to ensure that only legitimate users, and not hackers or other fraudsters, are using the resources.<br />
<br />
Computer security is commonly provided by requiring a user to submit credentials in the form of a password or pass code. For example, a mobile device may lock after a set number of minutes of inactivity, and may require a user to type a password that is known only to them in order to gain access to the services on the device (or may provide access to limited services without a password). In a similar manner, a web site may require a user to enter a password before being granted access. Also, certain web sites may require potential users to enter a term that is displayed to the users in an obscured manner so that automated machines cannot access the web sites for proper or improper purposes (e.g., to overload the web site servers). Such techniques are commonly referred to as CAPTCHAs (Completely Automated Public Turing test to tell Computers and Humans Apart).<br />
<br />
SUMMARY<br />
<br />
This document describes systems and techniques that may be used to limit access to computing services, which, throughout this document, includes computing devices, electronic devices (e.g., music devices, game devices, etc.) and computing services (e.g., online computing services, web pages, etc.). In general, multiple images are shown to a user along with multiple identifiers, and a challenge may require the user to select the appropriate identifier for each of the images to gain access. For example, the images may be objects and the identifiers may be names of objects. More identifiers than images may be shown to the user such that the user has more identifiers to select from to associate with each of the images. If the user selects the appropriate identifier for each of the images, then access is granted. Such an example could be used in a CAPTCHA system to block access by automated computing systems, but permit access by human users.<br />
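As a concrete illustration (a minimal Python sketch; the corpus, function names, and counts are hypothetical and not taken from the patent), a challenge of this kind might be assembled by pairing the correct identifiers with decoys drawn from the rest of a labeled corpus:<br />
<br />

```python
import random

# Hypothetical labeled corpus: image name -> correct identifier.
CORPUS = {
    "boat.png": "Boat",
    "giraffe.png": "Animal",
    "teapot.png": "Teapot",
    "car.png": "Car",
    "clock.png": "Clock",
    "guitar.png": "Guitar",
    "house.png": "House",
    "tree.png": "Tree",
    "shoe.png": "Shoe",
}

def build_challenge(num_images=3, identifiers_per_image=3, rng=random):
    """Pick images and present more identifiers than images.

    The extra identifiers are decoys drawn from the rest of the
    corpus, so answering correctly requires recognizing each image.
    """
    images = rng.sample(list(CORPUS), num_images)
    answer = {img: CORPUS[img] for img in images}
    decoy_pool = [label for name, label in CORPUS.items() if name not in images]
    num_decoys = num_images * identifiers_per_image - num_images
    identifiers = list(answer.values()) + rng.sample(decoy_pool, num_decoys)
    rng.shuffle(identifiers)
    return images, identifiers, answer

def check_response(selected, answer):
    """Grant access only if every image received its correct identifier."""
    return selected == answer
```

<br />
With the defaults above, three images are presented alongside nine identifiers, matching the two-or-three-times ratio discussed below.<br />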
<br />
In one exemplary implementation, the images may be three dimensional models. Also, the three dimensional (3D) models may be generated on the fly as requests for access are received. Many different variations of the same images may be presented to the user. For example, if the images presented are 3D models, the 3D models may be colored, textured, rotated and/or set against various backgrounds to achieve many different variations of the same 3D models. In this manner, a small corpus of labeled 3D models may be used: although the corpus is small, the number of potential variations is great, and the system does not have to rely on an enormous corpus of labeled data to provide the necessary variation against attackers, who might attempt to label a corpus of stock photos or images.<br />
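The variation idea can be sketched as follows (Python; the parameter lists and model names are illustrative assumptions, not values from the patent). Each render draws random parameters, while the label carried by the model never changes:<br />
<br />

```python
import random

# A small hypothetical corpus of labeled 3D models; the label
# survives every variation applied to the model.
MODELS = {"boat": "Boat", "giraffe": "Animal", "teapot": "Teapot"}

COLORS = ["red", "green", "blue", "yellow"]
TEXTURES = ["fur", "bumps", "wood", "metal"]
BACKGROUNDS = ["sky", "desert", "checker", "noise"]

def render_variant(model, rng=random):
    """Return rendering parameters for one randomized variant.

    The same model can appear in many guises -- colored, textured,
    rotated, and set against a random background -- so a small corpus
    yields a large space of distinct challenge images.
    """
    return {
        "model": model,
        "label": MODELS[model],          # identifier is unchanged
        "color": rng.choice(COLORS),
        "texture": rng.choice(TEXTURES),
        "rotation_deg": rng.randrange(0, 360),
        "background": rng.choice(BACKGROUNDS),
    }

# Even with only 3 models, this parameter space alone gives
# 4 * 4 * 360 * 4 = 23040 variants per model.
```
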
<br />
Multiple images also may be displayed to increase the level of security (because it is much harder to label three or four or six images by guessing than it is to label one). Also, the images may be pre-screened so that only images that are very difficult for a computing system to automatically label with an identifier are selected.<br />
<br />
In certain implementations, such systems and techniques may provide one or more advantages. For example, using multiple images such as 3D models that can be colored, textured, rotated and/or set against various backgrounds, along with more identifiers to select from than images, can provide a large number of different inputs and thus relatively high security. The systems and techniques may be presented to a user on devices that use a touch screen such that the user can make identifier selections without using a keyboard or mouse. It also permits the user to enter a pass code without the use of a keyboard. Such an approach may be particularly useful for touch screen devices such as mobile smart phones, where a keyboard is hidden during normal use of the device. Also, image-based access may provide a more pleasing interface for users of computing devices, so that the users are more likely to use or remember a device or service.<br />
<br />
According to one general aspect, a computer-implemented method may include presenting to a user multiple images, multiple identifiers and a challenge to select one of the identifiers for each of the images, receiving the selected identifiers from the user and providing access to a computing service based on a comparison of the selected identifiers to an answer to the challenge.<br />
<br />
Implementations may include one or more of the following features. For example, the images may be three dimensional models. The images may be randomly textured, three dimensional models. The images may be three dimensional models with each of the three dimensional models set against a separate, randomly generated background. The images may be randomly rotated, three dimensional models. The images may be randomly colored, three dimensional models. The images may be randomly textured, three dimensional models with each of the three dimensional models set against a separate, randomly generated background.<br />
<br />
In one exemplary implementation, at least two times more of the identifiers are presented than the images. In another exemplary implementation, at least three times more of the identifiers are presented than the images.<br />
<br />
Providing access to the computing service may include unlocking a mobile computing device and/or may include serving to the user a web page.<br />
<br />
In another general aspect, a recordable storage medium may include recorded and stored instructions that, when executed, perform the actions of presenting to a user multiple images, multiple identifiers and a challenge to select one of the identifiers for each of the images, receiving the selected identifiers from the user and providing access to a computing service based on a comparison of the selected identifiers to an answer to the challenge.<br />
<br />
Implementations may include one or more of the following features. For example, the images may be three dimensional models. The images may be randomly textured, three dimensional models. The images may be three dimensional models with each of the three dimensional models set against a separate, randomly generated background. The images may be randomly rotated, three dimensional models. The images may be randomly colored, three dimensional models. The images may be randomly textured, three dimensional models with each of the three dimensional models set against a separate, randomly generated background. Providing access to the computing service may include unlocking a mobile computing device and/or serving to the user a web page.<br />
<br />
In another general aspect, a computer-implemented access control system may include one or more servers that are arranged and configured to present to a user multiple images, multiple identifiers and a challenge to select one of the identifiers for each of the images, receive the selected identifiers from the user and provide access to a computing service based on a comparison of the selected identifiers to an answer to the challenge.<br />
<br />
Implementations may include one or more of the following features. For example, the images may be randomly textured, three dimensional models with each of the three dimensional models set against a separate, randomly generated background. The servers may be arranged and configured to provide access to the computing service including serving to the user a web page.<br />
<br />
In another general aspect, a computer-implemented method may include presenting to a user multiple images, multiple identifiers and a challenge to select one of the identifiers for each of the images, receiving the selected identifiers from the user and providing access to an electronic device based on a comparison of the selected identifiers to an answer to the challenge.<br />
<br />
Implementations may include one or more of the following features. For example, the images may be randomly textured, three dimensional models with each of the three dimensional models set against a separate, randomly generated background. Providing access to the electronic device may include unlocking a music device and/or unlocking a game device.<br />
<br />
The details of one or more implementations are set forth in the accompanying drawings and the description below. Other features will be apparent from the description and drawings, and from the claims.<br />
<br />
BRIEF DESCRIPTION OF THE DRAWINGS<br />
<br />
FIGS. 1A-1D show example screen shots of a challenge presented to a user to gain access.<br />
<br />
FIG. 2 is an exemplary block diagram of an illustrative mobile system for limiting access using images and identifier inputs from users.<br />
<br />
FIG. 3 is a flowchart of an example process for limiting access to a device or service.<br />
<br />
FIG. 4 is a swim lane diagram of an example process for granting user access to an online service.<br />
<br />
FIG. 5 shows an example of a computer device and a mobile computer device that can be used to implement the techniques described here.<br />
<br />
Like reference symbols in the various drawings indicate like elements.<br />
<br />
DETAILED DESCRIPTION<br />
<br />
This document describes systems and techniques for mediating access to computing services, which throughout this document includes mediating access to computing devices, electronic devices (e.g., music devices, game devices, etc.) and mediating access to computing services (e.g., online computing services including websites and web pages). Such techniques may include displaying one or more images and multiple identifiers. The user may then be challenged and/or prompted to select one of the presented identifiers for each of the images. If the user properly selects the correct identifier for each of the images, the user may be provided access to a device or service.<br />
<br />
FIGS. 1A-1D show an example screen shot 100, which may be presented to a user. The screen shot 100 may be presented in response to the user seeking access to a device or to a service. For example, the user may navigate to a website using a browser, where the screen shot 100 is presented to the user before the user can enter the website. The screen shot 100 also may be presented to a user seeking to unlock a device such as after a period of inactivity or after the device goes from a sleep mode to an active mode.<br />
<br />
The screen shot 100 includes a challenge to the user that the user is required to answer correctly in order to gain access. In the figures, screen shot 100 includes multiple images 102a-102c, multiple identifiers 104 and a submit button 106. The images 102a-102c may be randomly generated and presented to the user in the screen shot 100. To gain access, the user is challenged to select the appropriate identifier from the list of identifiers 104 for each of the images 102a-102c and to submit the selections using the submit button 106. For example, instructions may be provided to the user telling the user that access may be granted by correctly labeling each of the images 102a-102c with one of the provided identifiers 104. If the user selects the correct identifier for each of the images 102a-102c, then access is granted. If the user does not select the correct identifier for each of the images 102a-102c, then access is denied.<br />
<br />
In FIG. 1A, the screen shot 100 is provided to the user including a challenge to label each of the images 102a-102c with the correct identifier from the provided identifiers 104. Each of the images 102a-102c is displayed as being "unanswered" meaning that an identifier has not been selected for any of the images 102a-102c. The user may select an identifier for an image in different ways. For instance, the user may select one of the images such as image 102a and then select an identifier from the provided list of identifiers 104. The selected identifier may be displayed with the image in place of "unanswered." The user may change a selected identifier for an image simply by selecting another identifier while the image is highlighted. As the user selects an image, the instructions provided to the user may change. In FIG. 1A, if the user selects image 102a, the instructions in the screen shot 100 state "Please identify image 1." As the user selects the other images 102b and 102c, the instructions may change accordingly.<br />
<br />
FIG. 1B illustrates the screen shot 100 where the user has selected image 102a and selected the identifier "Boat" from the list of identifiers 104 for the image 102a. The identifier is now displayed below the image 102a. The images 102a-102c and the identifiers 104 may be selected using a touch screen, a mouse, a keyboard and/or other types of methods to select objects displayed on a screen. Although the identifiers 104 are illustrated as a list next to the images 102a-102c, this illustrates merely one exemplary implementation. Other implementations may be used to present the identifiers 104 to the user. For instance, the identifiers 104 may be presented to the user in a drop down menu. Also, the identifiers may be presented below each of the images 102a-102c in a drop down menu or other presentation mechanism including, for example, in a pop-up window.<br />
<br />
In FIG. 1B, the remaining two images 102b and 102c are "unanswered." When the user highlights or otherwise selects image 102b, the instructions in the screen shot 100 may change to state "Please identify image 2." FIG. 1C illustrates the screen shot 100 where the user has selected the image 102b and selected the identifier "Animal" from the list of identifiers 104 for the image 102b. The identifier is now displayed below the image 102b. Although the selected identifier is displayed below the image in this example, the selected identifier for an image may be indicated in other exemplary manners. The remaining image 102c is "unanswered." When the user highlights or otherwise selects image 102c, the instructions in the screen shot may change to state "Please identify image 3." The instructions as presented to the user in this example are merely exemplary and other forms or manners of presenting instructions to the user may be implemented.<br />
<br />
FIG. 1D illustrates the screen shot 100 where the user has selected the image 102c and selected the identifier "Teapot" from the list of identifiers 104 for the image 102c. The selected identifier is now displayed below the image 102c. When the user has selected an identifier for each of the images 102a-102c, the instructions may tell the user to "Please submit" in order to have the selected identifiers submitted for a comparison against the correct identifiers.<br />
<br />
In one exemplary implementation, the submit button 106 may be grayed-out or not selectable until the user has selected an identifier for each of the images 102a-102c. In other exemplary implementations, the submit button 106 may be selectable at any time. The selection of the submit button 106 by the user may cause the selected identifiers to be submitted for a comparison against the correct identifiers. For example, if the screen shot 100 is presented to a user attempting to unlock a device, then selection of the submit button 106 may cause the selected identifiers to be compared against the correct identifiers for this particular challenge, where the comparison of the selected identifiers against the correct identifiers may be performed by a module in the device. If the comparison is a match, then the device is unlocked. If the comparison is not a match, the device is not unlocked. The user may be given one or more additional chances to unlock the device either with the same challenge or with a different randomly generated challenge. After a configurable number of unsuccessful attempts, the device may be locked on a more permanent basis. Such a system may be used to enable humans to access the device, but to prevent automated computer systems from accessing the device, especially devices that are capable of communicating with wired and/or wireless networks. Such a system also may be used to prevent accidental activation or use of the device when such use of the device is not intended by the user, such as when the device is in the user's pocket or other device holder.<br />
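The unlock logic described above — compare the submission, count failures, and lock more permanently after a configurable number of misses — might look like this in outline (Python; the class and attribute names are hypothetical, not from the patent):<br />
<br />

```python
class DeviceLock:
    """Gate device access behind an image-labeling challenge."""

    def __init__(self, answer, max_attempts=3):
        self.answer = answer            # correct identifier per image
        self.max_attempts = max_attempts
        self.failures = 0
        self.unlocked = False
        self.locked_out = False         # the "more permanent" lock state

    def submit(self, selected):
        """Compare selected identifiers to the answer; return unlock state."""
        if self.locked_out or self.unlocked:
            return self.unlocked
        if selected == self.answer:
            self.unlocked = True
        else:
            self.failures += 1
            if self.failures >= self.max_attempts:
                self.locked_out = True
        return self.unlocked
```

<br />
In a real device the locked-out state would presumably be cleared only by a stronger recovery mechanism; that policy is outside this sketch.<br />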
<br />
Similarly, if the screen shot 100 is presented to a user attempting to access an online service such as, for example, attempting to access a website, then selection of the submit button 106 may cause the selected identifiers to be communicated to an access server. The comparison of the selected identifiers to the correct identifiers may be performed by the access server. If the comparison is a match, then access is granted to the website. If the comparison is not a match, then access is denied. Such a system may be used to enable humans to access the website, but to prevent automated computer systems from accessing the website because the automated systems may not be able to recognize the images and to select the correct identifier for each of the images.<br />
<br />
In these example figures, the user is presented with more identifiers to select from than there are images presented. In one exemplary implementation, the user may be presented with at least twice as many identifiers to select from than there are images presented. In another exemplary implementation, the user may be presented with at least three times as many identifiers to select from than there are images presented. The more identifiers that are presented in relation to the number of images, the lower the probability that a human or an automated computing system would randomly guess the correct identifier for each of the images.<br />
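The effect of the identifier-to-image ratio on blind guessing can be quantified with a short calculation (a sketch under stated assumptions: if each identifier may be reused across images the picks are independent, otherwise a guess is an ordered selection of distinct identifiers):<br />
<br />

```python
from math import perm

def guess_probability(num_identifiers, num_images, reuse=False):
    """Probability that a blind guesser labels every image correctly."""
    if reuse:
        # Independent 1-in-n pick for each image.
        return 1 / num_identifiers ** num_images
    # Ordered selection of distinct identifiers: 1 / P(n, k).
    return 1 / perm(num_identifiers, num_images)

# One image among nine identifiers: 1 chance in 9.
# Three images among nine identifiers (no reuse): 1 chance in 504.
```

<br />
So moving from one image to three, with three times as many identifiers as images, drops the guessing odds from roughly 11% to about 0.2%.<br />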
<br />
In one exemplary implementation, the images presented to the user may be computer-generated three dimensional (3D) models. For example, the images 102a-102c may be computer-generated 3D models of different objects, namely, a boat, an animal and a teapot. The use of 3D models may make it more difficult for automated computing systems to determine the identity of the image. Additionally, the same 3D models may be presented to the user with many different variations to the 3D model. For instance, the 3D model may be stylistically rendered and presented to include different colors, textures, and/or shading styles. The 3D models also may be randomly rotated such that they can be presented in various different orientations. The 3D models also may be presented against various different backgrounds. For example, each of the images 102a-102c may be presented against a different background.<br />
<br />
The different variations may be applied to a 3D model individually or collectively in different combinations. For instance, the image 102b of the giraffe may be rotated and the giraffe object may be textured in something other than giraffe spots such as, for example, fur or bumps or any of many other types of textures. When these techniques are used to unlock a device, the device may randomly generate the 3D models with the different potential variations for presentation to the user. When these techniques are used to access a computing service, a server or other computing device that is remote from the user may randomly generate the 3D models with the different potential variations for presentation to the user.<br />
<br />
In the above example, having the user select the correct identifier for each of the images to unlock the device may prevent the user from accidentally hitting buttons (e.g., when the device is in the user's pocket). Also, this makes it more difficult for remote hackers, especially automated machines, to access the device using guesses and other brute force-type techniques.<br />
<br />
In one exemplary implementation, the images 102a-102c may be presented as a single composite image with the images 102a-102c being objects within the single composite image instead of the images 102a-102c being presented as multiple independent images. For example, the images 102a-102c may be presented left-to-right as objects within the single composite image. In another example, the images 102a-102c may be presented top-to-bottom as objects within the single composite image. The user may be challenged to select the proper identifier from the provided identifiers for each of the objects within the single composite image in the different manners described above.<br />
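A composite of this sort could be laid out with a simple packing routine (Python; purely illustrative — the patent does not specify how objects are arranged beyond left-to-right or top-to-bottom):<br />
<br />

```python
def layout_composite(image_sizes, direction="left-to-right", padding=10):
    """Compute bounding boxes for objects packed into one composite image.

    Each entry of image_sizes is a (width, height) pair; returns one
    (x, y, w, h) box per object plus the overall canvas size.
    """
    boxes = []
    x = y = padding
    for w, h in image_sizes:
        boxes.append((x, y, w, h))
        if direction == "left-to-right":
            x += w + padding
        else:  # top-to-bottom
            y += h + padding
    if direction == "left-to-right":
        canvas = (x, padding * 2 + max(h for _, h in image_sizes))
    else:
        canvas = (padding * 2 + max(w for w, _ in image_sizes), y)
    return boxes, canvas
```

<br />
A renderer would then draw each object into its box, so the user still sees distinct objects but receives a single image.<br />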
<br />
The above techniques also may be used in combination with other security techniques such as, for example, passwords and/or biometrics to provide additional security to gain access.<br />
<br />
FIG. 2 is an exemplary block diagram of an illustrative mobile system 200 for limiting device access using images and identifier inputs from users. The system includes, in the main, a mobile computing device 202, such as, for example, a smart phone or personal digital assistant (PDA), to which access can be granted, or that may mediate access to assets from remote servers or other computers, such as access to Internet web sites or to features and services on Internet web sites.<br />
<br />
The device 202 can interact graphically using a graphical user interface (GUI) on a display 204 that may show representations of various images to a user and that may receive input from the user. In one example, the display 204 is a touch screen display, so that a user may directly press upon images to manipulate them on the display 204 and to select the correct identifier for each of the images from the provided identifiers. Input to the device may also be provided using a trackball 206 and a keyboard 207 on the device 202. The keyboard 207 may be a hard keyboard with physical keys, a soft keyboard that is essentially a touch screen keyboard, or a combination of both.<br />
<br />
A display manager 208 is provided to supervise and coordinate information to be shown on the display 204. The display manager 208, for example, may be provided with data relating to information to be displayed and may coordinate data received from various different applications or modules. As one example, display manager 208 may receive data for overlapping windows on a windowed display and may determine which window is to be on top and where the lower window or windows are to be cut.<br />
<br />
Device inputs such as presses on the touch screen 204 may be processed by an input manager 212. For example, the input manager 212 may receive information regarding input provided by a user on touch screen 204, and may forward such information to various applications or modules. For example, the input manager 212 may cooperate with the display manager 208 so as to understand what onscreen elements a user is selecting when they press on the touch screen 204.<br />
<br />
The device 202 may include a processor 216 that executes instructions stored in memory 217, including instructions provided by a variety of applications 214 stored on the device 202. The processor 216 may comprise multiple processors responsible for coordinating interactions among other device components and communications over an I/O interface 219. The processor 216 also may be responsible for managing internal alerts generated by the device 202. For example, the processor 216 may be alerted by the input manager 212 (which may operate on the processor) when a user touches the display 204 so as to take the device 202 out of a sleep mode state. Such an input may cause the processor 216 to present images and identifiers to the user for the user to select and submit the correct identifier for each of the images in order to provide access to the device 202 or various services, as explained above and below. In one exemplary implementation, the input may cause the processor 216 to generate the images as 3D models for presentation to the user along with multiple identifiers. Also, the processor 216 may generate the variations such as, for example, color, shading, textures, different backgrounds and/or rotations, and randomly apply the variations to the 3D models or non-3D images for presentation to the user on the display 204.<br />
<br />
The processor 216 may perform such functions in cooperation with a device access manager 210. The device access manager 210 may execute code to gather images from the access images memory 222, to gather the identifiers, and to present the images and identifiers to a user of the device 202. The device access manager 210 may display the images in a manner that permits user manipulation of the images and the identifiers, may test user selected identifiers, and may provide an indication that access should be granted or denied. The device access manager 210 also may execute code to apply randomly the different variations to the images such as, for example, color, shading, textures, backgrounds and/or rotations for presentation to the user on the display 204. In one exemplary implementation, the device access manager 210 may execute code to use a lapped textures technique to select a texture sample and apply it to a 3D model such that the 3D model is textured and the textured 3D model is presented to the user.<br />
<br />
The device also includes memory 220, 222 storing various data. The memory 220, 222 may comprise random access memory where computer instructions and data are stored in a volatile memory device for execution by the processor 216. The memory 220, 222 may also include read-only memory where invariant low-level systems code or data for basic system functions such as basic input and output, and startup instructions reside. In addition, the memory 220, 222 may include other suitable types of memory such as programmable read-only memory, erasable programmable read-only memory, electrically erasable programmable read-only memory, hard disks, and removable memory such as microSD cards or Flash memory.<br />
<br />
The memory 220, 222 may, in one example, include user data memory 220, which may store various parameters describing preferences for a user of the device 202. The user data memory 220 may, for example, store and provide ordinary user pass codes, user identifying information (e.g., name, address, telephone numbers, and e-mail addresses), and other such information. Separately or together, access images memory 222 may store images and identifiers used to access the device 202 or various web pages. The access images memory also may store information needed to generate the different variations to be applied to the images, such as the 3D models. In one exemplary implementation, the access images memory 222 may store multiple individual images from which the device access manager 210 may select for presentation on the display 204. In another exemplary implementation, the access images memory 222 may store multiple single composite images from which the device access manager 210 may select for presentation on the display 204. The single composite images may include multiple images that are objects within the single composite image, where the objects may be arranged in various different manners (e.g., right-to-left, top-to-bottom, etc.).<br />
<br />
The device 202 may communicate with other devices or a network through a wireless interface 218. The wireless interface 218 may provide for communication by the device 202 with messaging services such as text messaging, e-mail, and telephone voice mail messaging. In addition, the wireless interface 218 may support downloads and uploads of content and computer code over a wireless network. The wireless interface 218 may additionally provide for voice communications in a wireless network in a familiar manner. As one example, the wireless interface 218 may be used to interact with internet web pages that are to be displayed on display 204, and to submit orientation information to a server or servers remote from the device 202.<br />
<br />
FIG. 3 is a flowchart of an example process 300 for limiting access to a device or a computing service. In general, the process 300 involves presenting images and identifiers to a user and determining whether the user can select the correct identifier for each of the images from the provided identifiers, and thus to conclude that the user is a human who should be granted access to the device or service.<br />
<br />
Process 300 may include presenting to a user multiple images, multiple identifiers and a challenge to select one of the identifiers for each of the images (302). For example, as discussed above in FIGS. 1A-1D, images 102a-102c and identifiers 104 may be presented to the user. The challenge may be implicit in that the images are initially presented as being "unanswered" as illustrated in FIG. 1A. The challenge also may be explicit in that, for example, instructions are presented to the user to identify each of the images and to submit the identifiers. For example, FIGS. 1A-1D illustrate exemplary instructions that may be provided to the user in the screen shot 100.<br />
<br />
As discussed above, the images presented to the user may include 3D models that may be generated in response to a request for access. In one exemplary implementation, to provide access to a computing service, a server on a network may generate the 3D models for presentation to the user. In another exemplary implementation, to provide access to a device or to a service, a module on the device (e.g., device access manager 210 of FIG. 2) may generate the 3D models for presentation to the user.<br />
<br />
The images presented to the user may include many variations on the same images. For example, if the images are 3D models, the same 3D models may be randomly colored, shaded, textured, rotated and/or set against different random backgrounds so as to make it more difficult for a non-human to determine the proper identifier for the image. Also, by using different variations of the same 3D model, a smaller corpus of 3D models may be used and yet still achieve many, many different variations.<br />
<br />
Process 300 also includes receiving the selected identifiers from the user (304). For example, the selected identifiers may be communicated to a module within a device or the selected identifiers may be communicated to a server on a network. The selected identifiers are received and a comparison is made to determine if the selected identifiers match an answer to the challenge (306). The answer to the challenge may be the correct identifiers for each of the presented images. If the selected identifiers do not match the answer, then access is denied (308). If the selected identifiers match the answer, then access is provided (310).<br />
<br />
FIG. 4 is a swim lane diagram of an example process 400 for granting user access to a web page and/or to an online service. A client may request access to a web page and/or to an online service (401). A request for access by a client may be received at an access server (402). The access server may request and retrieve multiple images and identifiers from an image repository (404). For example, the images (e.g., 3D models) may be stored on a storage medium as part of an image repository. The images may be stored along with metadata, which may further describe or include additional information regarding the image. The respective identifiers may be stored along with the images and/or the identifiers may be a part of the metadata about each image.<br />
<br />
In one exemplary implementation, the image repository may store multiple individual images from which access server may select for presentation to the client. In another exemplary implementation, the image repository may store multiple single composite images from which the access server may select for presentation to the client. The single composite images may include multiple images that are objects within the single composite image, where the objects may be arranged in various different manners (e.g., right-to-left, top-to-bottom, etc.).<br />
<br />
The access server may be configured to generate and to apply one or more variations to the retrieved images (406). For example, if the images are 3D models, the access server may randomly apply a color to one or more of the images. Also, the access server may randomly apply a texture to one or more of the images. In one exemplary implementation, the access server may use a lapped texture technique to apply a texture to the 3D model. Also, the access server may set the images against different backgrounds, shade the images and/or rotate the images in different orientations. Although the variations may be applied to each of the images, the identifier for the image remains the same. For example, although a 3D model of a giraffe may be colored red and textured with fur, the identifier for the 3D model is still "giraffe." A human being viewing the colored and textured giraffe will be able to perceive that the 3D model is a giraffe and that the correct identifier is a giraffe; however, an automated computing system may have a difficult time determining that the 3D model is a giraffe, especially if the automated computing system is using standard giraffe characteristics to make this guess.<br />
<br />
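A minimal sketch of the variation step (406), assuming images are simple records rather than real 3D models; the color, texture, and rotation values are illustrative only:

```python
import random


def apply_variations(image, rng=random):
    """Randomly recolor, retexture, and rotate an image; the identifier is untouched."""
    varied = dict(image)  # copy so the original repository entry is not mutated
    varied["color"] = rng.choice(["red", "blue", "green"])
    varied["texture"] = rng.choice(["fur", "scales", "smooth"])
    varied["rotation_deg"] = rng.randrange(0, 360)
    return varied
```

However the model is recolored or rotated, its `"identifier"` field still reads `"giraffe"` for a giraffe model, which is the property the paragraph above relies on.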
The access server may be configured to present the images and the identifiers along with a challenge to the client that requested access (408). The client may receive and display the images and the identifiers (410). The client may receive selected identifiers from a user for each of the images (412) and may submit the selected identifiers to the access server (414).<br />
<br />
The access server may receive the selected identifiers from the client (416) and may compare the selected identifiers to the correct identifiers for the images that were presented to the client (418). The access server may maintain a table in memory of the answer to the challenge that was presented to the user. For instance, the access server may maintain a table that tracks the images and/or identifiers that were served to a particular client such that when the selected identifiers are received, the selected identifiers may be compared against the identifiers in the table.<br />
<br />
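One way to picture the answer table from steps 416-418, sketched in Python; the `AccessServer` class and its one-attempt `verify` behavior are assumptions made for illustration, not details from the patent:

```python
import secrets


class AccessServer:
    """Remember the answer for each served challenge and check submissions."""

    def __init__(self):
        self._answers = {}  # challenge_id -> correct identifiers, in order

    def serve(self, correct_identifiers):
        """Record the answer and hand back an opaque challenge id."""
        challenge_id = secrets.token_hex(8)
        self._answers[challenge_id] = list(correct_identifiers)
        return challenge_id

    def verify(self, challenge_id, selected_identifiers):
        """Grant access only if every selected identifier matches, in order."""
        answer = self._answers.pop(challenge_id, None)  # one attempt per challenge
        return answer is not None and answer == list(selected_identifiers)
```

Popping the entry on verification means a challenge cannot be replayed, which is one plausible way such a table could guard against automated retries.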
If the selected identifiers match, then the access server may grant access and redirect the client's browser to the appropriate web page in the website or to the appropriate online service, as the case may be (420). The web page(s) corresponding to the secure portion of the website may be displayed on the client browser (422).<br />
<br />
FIG. 5 shows an example of a generic computer device 500 and a generic mobile computer device 550, which may be used with the techniques described here. Computing device 500 is intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. Computing device 550 is intended to represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smart phones, and other similar computing devices. The components shown here, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the inventions described and/or claimed in this document.<br />
<br />
Computing device 500 includes a processor 502, memory 504, a storage device 506, a high-speed interface 508 connecting to memory 504 and high-speed expansion ports 510, and a low speed interface 512 connecting to low speed bus 514 and storage device 506. Each of the components 502, 504, 506, 508, 510, and 512 is interconnected using various buses, and may be mounted on a common motherboard or in other manners as appropriate. The processor 502 can process instructions for execution within the computing device 500, including instructions stored in the memory 504 or on the storage device 506 to display graphical information for a GUI on an external input/output device, such as display 516 coupled to high speed interface 508. In other implementations, multiple processors and/or multiple buses may be used, as appropriate, along with multiple memories and types of memory. Also, multiple computing devices 500 may be connected, with each device providing portions of the necessary operations (e.g., as a server bank, a group of blade servers, or a multi-processor system).<br />
<br />
The memory 504 stores information within the computing device 500. In one implementation, the memory 504 is a volatile memory unit or units. In another implementation, the memory 504 is a non-volatile memory unit or units. The memory 504 may also be another form of computer-readable medium, such as a magnetic or optical disk.<br />
<br />
The storage device 506 is capable of providing mass storage for the computing device 500. In one implementation, the storage device 506 may be or contain a computer-readable medium, such as a floppy disk device, a hard disk device, an optical disk device, or a tape device, a flash memory or other similar solid state memory device, or an array of devices, including devices in a storage area network or other configurations. A computer program product can be tangibly embodied in an information carrier. The computer program product may also contain instructions that, when executed, perform one or more methods, such as those described above. The information carrier is a computer- or machine-readable medium, such as the memory 504, the storage device 506, or memory on processor 502.<br />
<br />
The high speed controller 508 manages bandwidth-intensive operations for the computing device 500, while the low speed controller 512 manages lower bandwidth-intensive operations. Such allocation of functions is exemplary only. In one implementation, the high-speed controller 508 is coupled to memory 504, display 516 (e.g., through a graphics processor or accelerator), and to high-speed expansion ports 510, which may accept various expansion cards (not shown). In this implementation, low-speed controller 512 is coupled to storage device 506 and low-speed expansion port 514. The low-speed expansion port, which may include various communication ports (e.g., USB, Bluetooth, Ethernet, wireless Ethernet), may be coupled to one or more input/output devices, such as a keyboard, a pointing device, a scanner, or a networking device such as a switch or router, e.g., through a network adapter.<br />
<br />
The computing device 500 may be implemented in a number of different forms, as shown in the figure. For example, it may be implemented as a standard server 520, or multiple times in a group of such servers. It may also be implemented as part of a rack server system 524. In addition, it may be implemented in a personal computer such as a laptop computer 522. Alternatively, components from computing device 500 may be combined with other components in a mobile device (not shown), such as device 550. Each of such devices may contain one or more of computing device 500, 550, and an entire system may be made up of multiple computing devices 500, 550 communicating with each other.<br />
<br />
Computing device 550 includes a processor 552, memory 564, an input/output device such as a display 554, a communication interface 566, and a transceiver 568, among other components. The device 550 may also be provided with a storage device, such as a microdrive or other device, to provide additional storage. Each of the components 550, 552, 564, 554, 566, and 568 is interconnected using various buses, and several of the components may be mounted on a common motherboard or in other manners as appropriate.<br />
<br />
The processor 552 can execute instructions within the computing device 550, including instructions stored in the memory 564. The processor may be implemented as a chipset of chips that include separate and multiple analog and digital processors. The processor may provide, for example, for coordination of the other components of the device 550, such as control of user interfaces, applications run by device 550, and wireless communication by device 550.<br />
<br />
Processor 552 may communicate with a user through control interface 558 and display interface 556 coupled to a display 554. The display 554 may be, for example, a TFT LCD (Thin-Film-Transistor Liquid Crystal Display) or an OLED (Organic Light Emitting Diode) display, or other appropriate display technology. The display interface 556 may comprise appropriate circuitry for driving the display 554 to present graphical and other information to a user. The control interface 558 may receive commands from a user and convert them for submission to the processor 552. In addition, an external interface 562 may be provided in communication with processor 552, so as to enable near area communication of device 550 with other devices. External interface 562 may provide, for example, for wired communication in some implementations, or for wireless communication in other implementations, and multiple interfaces may also be used.<br />
<br />
The memory 564 stores information within the computing device 550. The memory 564 can be implemented as one or more of a computer-readable medium or media, a volatile memory unit or units, or a non-volatile memory unit or units. Expansion memory 574 may also be provided and connected to device 550 through expansion interface 572, which may include, for example, a SIMM (Single In Line Memory Module) card interface. Such expansion memory 574 may provide extra storage space for device 550, or may also store applications or other information for device 550. Specifically, expansion memory 574 may include instructions to carry out or supplement the processes described above, and may include secure information also. Thus, for example, expansion memory 574 may be provided as a security module for device 550, and may be programmed with instructions that permit secure use of device 550. In addition, secure applications may be provided via the SIMM cards, along with additional information, such as placing identifying information on the SIMM card in a non-hackable manner.<br />
<br />
The memory may include, for example, flash memory and/or NVRAM memory, as discussed below. In one implementation, a computer program product is tangibly embodied in an information carrier. The computer program product contains instructions that, when executed, perform one or more methods, such as those described above. The information carrier is a computer- or machine-readable medium, such as the memory 564, expansion memory 574, or memory on processor 552, that may be received, for example, over transceiver 568 or external interface 562.<br />
<br />
Device 550 may communicate wirelessly through communication interface 566, which may include digital signal processing circuitry where necessary. Communication interface 566 may provide for communications under various modes or protocols, such as GSM voice calls, SMS, EMS, or MMS messaging, CDMA, TDMA, PDC, WCDMA, CDMA2000, or GPRS, among others. Such communication may occur, for example, through radio-frequency transceiver 568. In addition, short-range communication may occur, such as using a Bluetooth, WiFi, or other such transceiver (not shown). In addition, GPS (Global Positioning System) receiver module 570 may provide additional navigation- and location-related wireless data to device 550, which may be used as appropriate by applications running on device 550.<br />
<br />
Device 550 may also communicate audibly using audio codec 560, which may receive spoken information from a user and convert it to usable digital information. Audio codec 560 may likewise generate audible sound for a user, such as through a speaker, e.g., in a handset of device 550. Such sound may include sound from voice telephone calls, may include recorded sound (e.g., voice messages, music files, etc.) and may also include sound generated by applications operating on device 550.<br />
<br />
The computing device 550 may be implemented in a number of different forms, as shown in the figure. For example, it may be implemented as a cellular telephone 580. It may also be implemented as part of a smart phone 582, personal digital assistant, or other similar mobile device.<br />
<br />
Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, specially designed ASICs (application specific integrated circuits), computer hardware, firmware, software, and/or combinations thereof. These various implementations can include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, coupled to receive data and instructions from, and to transmit data and instructions to, a storage system, at least one input device, and at least one output device.<br />
<br />
These computer programs (also known as programs, software, software applications or code) include machine instructions for a programmable processor, and can be implemented in a high-level procedural and/or object-oriented programming language, and/or in assembly/machine language. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus and/or device (e.g., magnetic discs, optical disks, memory, Programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.<br />
<br />
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to the user and a keyboard and a pointing device (e.g., a mouse or a trackball) by which the user can provide input to the computer. Other kinds of devices can be used to provide for interaction with a user as well; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user can be received in any form, including acoustic, speech, or tactile input.<br />
<br />
The systems and techniques described here can be implemented in a computing system that includes a back end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front end component (e.g., a client computer having a graphical user interface or a Web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back end, middleware, or front end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), and the Internet.<br />
<br />
The computing system can include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.<br />
<br />
A number of embodiments have been described. Nevertheless, it will be understood that various modifications may be made without departing from the spirit and scope of the invention.<br />
<br />
In addition, the logic flows depicted in the figures do not require the particular order shown, or sequential order, to achieve desirable results. In addition, other steps may be provided, or steps may be eliminated, from the described flows, and other components may be added to, or removed from, the described systems. Accordingly, other embodiments are within the scope of the following claims.<br />
<br />
<br />
<center><b>* * * * *</b></center></div>Kevin Andrew Woolseyhttp://www.blogger.com/profile/01268449682429697653noreply@blogger.com0tag:blogger.com,1999:blog-3776716555337472667.post-62814304395547017102012-06-03T22:52:00.000-07:002012-06-03T22:52:10.952-07:00Real-time bookmarking of streaming media assets<div dir="ltr" style="text-align: left;" trbidi="on">
<br />
<center><br /><hr style="text-align: -webkit-auto;" />
</center><table><tbody>
<tr><td align="LEFT" width="50%"><b>United States Patent</b></td><td align="RIGHT" width="50%"><b>8,191,103</b></td></tr>
<tr><td align="LEFT" width="50%"><b>Hofrichter , et al.</b></td><td align="RIGHT" width="50%"><b>May 29, 2012</b></td></tr>
</tbody></table>
<hr style="text-align: -webkit-auto;" />
<span style="text-align: -webkit-auto;">Real-time bookmarking of streaming media assets </span><br style="text-align: -webkit-auto;" /><br style="text-align: -webkit-auto;" /><center><b>Abstract</b></center><div style="text-align: -webkit-auto;">
A method for real-time bookmarking of streaming media assets is disclosed. In one embodiment, the method includes dynamically changing a presentation segment of a plurality of segments based on one or more bookmark signals from a viewer.</div>
<hr style="text-align: -webkit-auto;" />
<table><tbody>
<tr><td align="LEFT" valign="TOP" width="10%">Inventors:</td><td align="LEFT" width="90%"><b>Hofrichter; Klaus</b> (Santa Clara, CA)<b>, Rafey; Richter A.</b> (Santa Clara, CA)</td></tr>
<tr><td align="LEFT" valign="TOP" width="10%">Assignee:</td><td align="LEFT" width="90%"><b>Sony Corporation</b> (Tokyo, <b>JP</b>)<br /><b>Sony Electronics Inc.</b> (Park Ridge, NJ) </td></tr>
<tr><td align="LEFT" nowrap="" valign="TOP" width="10%">Appl. No.:</td><td align="LEFT" width="90%"><b>11/031,842</b></td></tr>
<tr><td align="LEFT" valign="TOP" width="10%">Filed:</td><td align="LEFT" width="90%"><b>January 6, 2005</b></td></tr>
</tbody></table>
<hr style="text-align: -webkit-auto;" />
<center><b>Related U.S. Patent Documents</b></center><hr style="text-align: -webkit-auto;" />
<table><tbody>
<tr><td width="7%"></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td align="left"></td><td align="center"><b><u>Application Number</u></b></td><td align="center"><b><u>Filing Date</u></b></td><td align="center"><b><u>Patent Number</u></b></td><td align="center"><b><u>Issue Date</u></b></td></tr>
<tr><td align="center"></td><td align="center">09651433</td><td align="center">Aug., 2000</td><td align="center"></td><td align="center"></td></tr>
<tr><td align="center"></td></tr>
</tbody></table>
<hr style="text-align: -webkit-auto;" />
<div style="text-align: -webkit-auto;">
</div>
<table><tbody>
<tr><td align="LEFT" valign="TOP" width="40%"><b>Current U.S. Class:</b></td><td align="RIGHT" valign="TOP" width="80%"><b>725/142</b> ; 725/131; 725/134; 725/139</td></tr>
<tr><td align="LEFT" valign="TOP" width="40%"><b>Current International Class:</b></td><td align="RIGHT" valign="TOP" width="80%">H04N 7/16 (20110101)</td></tr>
<tr><td align="LEFT" valign="TOP" width="40%"><b>Field of Search:</b></td><td align="RIGHT" valign="TOP" width="80%">725/87,142,139,134</td></tr>
</tbody></table>
<br />
<hr style="text-align: -webkit-auto;" />
<center><b>References Cited <a href="http://patft.uspto.gov/netacgi/nph-Parser?Sect1=PTO2&Sect2=HITOFF&p=1&u=%2Fnetahtml%2Fsearch-adv.htm&r=0&f=S&l=50&d=PALL&Query=ref/8191103">[Referenced By]</a></b></center><hr style="text-align: -webkit-auto;" />
<center><b>U.S. Patent Documents</b></center><table><tbody>
<tr><td width="33%"></td><td width="33%"></td><td width="34%"></td></tr>
<tr><td align="left"><a href="http://patft.uspto.gov/netacgi/nph-Parser?Sect2=PTO1&Sect2=HITOFF&p=1&u=%2Fnetahtml%2FPTO%2Fsearch-bool.html&r=1&f=G&l=50&d=PALL&RefSrch=yes&Query=PN%2F4745549">4745549</a></td><td align="left">May 1988</td><td align="left">Hashimoto</td></tr>
<tr><td align="left"><a href="http://patft.uspto.gov/netacgi/nph-Parser?Sect2=PTO1&Sect2=HITOFF&p=1&u=%2Fnetahtml%2FPTO%2Fsearch-bool.html&r=1&f=G&l=50&d=PALL&RefSrch=yes&Query=PN%2F4775935">4775935</a></td><td align="left">October 1988</td><td align="left">Yourick</td></tr>
<tr><td align="left"><a href="http://patft.uspto.gov/netacgi/nph-Parser?Sect2=PTO1&Sect2=HITOFF&p=1&u=%2Fnetahtml%2FPTO%2Fsearch-bool.html&r=1&f=G&l=50&d=PALL&RefSrch=yes&Query=PN%2F4965825">4965825</a></td><td align="left">October 1990</td><td align="left">Harvey et al.</td></tr>
<tr><td align="left"><a href="http://patft.uspto.gov/netacgi/nph-Parser?Sect2=PTO1&Sect2=HITOFF&p=1&u=%2Fnetahtml%2FPTO%2Fsearch-bool.html&r=1&f=G&l=50&d=PALL&RefSrch=yes&Query=PN%2F5223924">5223924</a></td><td align="left">June 1993</td><td align="left">Strubbe</td></tr>
<tr><td align="left"><a href="http://patft.uspto.gov/netacgi/nph-Parser?Sect2=PTO1&Sect2=HITOFF&p=1&u=%2Fnetahtml%2FPTO%2Fsearch-bool.html&r=1&f=G&l=50&d=PALL&RefSrch=yes&Query=PN%2F5231494">5231494</a></td><td align="left">July 1993</td><td align="left">Wachob</td></tr>
<tr><td align="left"><a href="http://patft.uspto.gov/netacgi/nph-Parser?Sect2=PTO1&Sect2=HITOFF&p=1&u=%2Fnetahtml%2FPTO%2Fsearch-bool.html&r=1&f=G&l=50&d=PALL&RefSrch=yes&Query=PN%2F5353121">5353121</a></td><td align="left">October 1994</td><td align="left">Young et al.</td></tr>
<tr><td align="left"><a href="http://patft.uspto.gov/netacgi/nph-Parser?Sect2=PTO1&Sect2=HITOFF&p=1&u=%2Fnetahtml%2FPTO%2Fsearch-bool.html&r=1&f=G&l=50&d=PALL&RefSrch=yes&Query=PN%2F5371551">5371551</a></td><td align="left">December 1994</td><td align="left">Logan et al.</td></tr>
<tr><td align="left"><a href="http://patft.uspto.gov/netacgi/nph-Parser?Sect2=PTO1&Sect2=HITOFF&p=1&u=%2Fnetahtml%2FPTO%2Fsearch-bool.html&r=1&f=G&l=50&d=PALL&RefSrch=yes&Query=PN%2F5481296">5481296</a></td><td align="left">January 1996</td><td align="left">Cragun et al.</td></tr>
<tr><td align="left"><a href="http://patft.uspto.gov/netacgi/nph-Parser?Sect2=PTO1&Sect2=HITOFF&p=1&u=%2Fnetahtml%2FPTO%2Fsearch-bool.html&r=1&f=G&l=50&d=PALL&RefSrch=yes&Query=PN%2F5534911">5534911</a></td><td align="left">July 1996</td><td align="left">Levitan</td></tr>
<tr><td align="left"><a href="http://patft.uspto.gov/netacgi/nph-Parser?Sect2=PTO1&Sect2=HITOFF&p=1&u=%2Fnetahtml%2FPTO%2Fsearch-bool.html&r=1&f=G&l=50&d=PALL&RefSrch=yes&Query=PN%2F5553281">5553281</a></td><td align="left">September 1996</td><td align="left">Brown et al.</td></tr>
<tr><td align="left"><a href="http://patft.uspto.gov/netacgi/nph-Parser?Sect2=PTO1&Sect2=HITOFF&p=1&u=%2Fnetahtml%2FPTO%2Fsearch-bool.html&r=1&f=G&l=50&d=PALL&RefSrch=yes&Query=PN%2F5614940">5614940</a></td><td align="left">March 1997</td><td align="left">Cobbley et al.</td></tr>
<tr><td align="left"><a href="http://patft.uspto.gov/netacgi/nph-Parser?Sect2=PTO1&Sect2=HITOFF&p=1&u=%2Fnetahtml%2FPTO%2Fsearch-bool.html&r=1&f=G&l=50&d=PALL&RefSrch=yes&Query=PN%2F5619249">5619249</a></td><td align="left">April 1997</td><td align="left">Billock et al.</td></tr>
<tr><td align="left"><a href="http://patft.uspto.gov/netacgi/nph-Parser?Sect2=PTO1&Sect2=HITOFF&p=1&u=%2Fnetahtml%2FPTO%2Fsearch-bool.html&r=1&f=G&l=50&d=PALL&RefSrch=yes&Query=PN%2F5625464">5625464</a></td><td align="left">April 1997</td><td align="left">Compoint et al.</td></tr>
<tr><td align="left"><a href="http://patft.uspto.gov/netacgi/nph-Parser?Sect2=PTO1&Sect2=HITOFF&p=1&u=%2Fnetahtml%2FPTO%2Fsearch-bool.html&r=1&f=G&l=50&d=PALL&RefSrch=yes&Query=PN%2F5635979">5635979</a></td><td align="left">June 1997</td><td align="left">Kostreski et al.</td></tr>
<tr><td align="left"><a href="http://patft.uspto.gov/netacgi/nph-Parser?Sect2=PTO1&Sect2=HITOFF&p=1&u=%2Fnetahtml%2FPTO%2Fsearch-bool.html&r=1&f=G&l=50&d=PALL&RefSrch=yes&Query=PN%2F5638443">5638443</a></td><td align="left">June 1997</td><td align="left">Stefik et al.</td></tr>
<tr><td align="left"><a href="http://patft.uspto.gov/netacgi/nph-Parser?Sect2=PTO1&Sect2=HITOFF&p=1&u=%2Fnetahtml%2FPTO%2Fsearch-bool.html&r=1&f=G&l=50&d=PALL&RefSrch=yes&Query=PN%2F5699107">5699107</a></td><td align="left">December 1997</td><td align="left">Lawler et al.</td></tr>
<tr><td align="left"><a href="http://patft.uspto.gov/netacgi/nph-Parser?Sect2=PTO1&Sect2=HITOFF&p=1&u=%2Fnetahtml%2FPTO%2Fsearch-bool.html&r=1&f=G&l=50&d=PALL&RefSrch=yes&Query=PN%2F5740549">5740549</a></td><td align="left">April 1998</td><td align="left">Reilly et al.</td></tr>
<tr><td align="left"><a href="http://patft.uspto.gov/netacgi/nph-Parser?Sect2=PTO1&Sect2=HITOFF&p=1&u=%2Fnetahtml%2FPTO%2Fsearch-bool.html&r=1&f=G&l=50&d=PALL&RefSrch=yes&Query=PN%2F5758257">5758257</a></td><td align="left">May 1998</td><td align="left">Herz et al.</td></tr>
<tr><td align="left"><a href="http://patft.uspto.gov/netacgi/nph-Parser?Sect2=PTO1&Sect2=HITOFF&p=1&u=%2Fnetahtml%2FPTO%2Fsearch-bool.html&r=1&f=G&l=50&d=PALL&RefSrch=yes&Query=PN%2F5758259">5758259</a></td><td align="left">May 1998</td><td align="left">Lawler</td></tr>
<tr><td align="left"><a href="http://patft.uspto.gov/netacgi/nph-Parser?Sect2=PTO1&Sect2=HITOFF&p=1&u=%2Fnetahtml%2FPTO%2Fsearch-bool.html&r=1&f=G&l=50&d=PALL&RefSrch=yes&Query=PN%2F5797010">5797010</a></td><td align="left">August 1998</td><td align="left">Brown</td></tr>
<tr><td align="left"><a href="http://patft.uspto.gov/netacgi/nph-Parser?Sect2=PTO1&Sect2=HITOFF&p=1&u=%2Fnetahtml%2FPTO%2Fsearch-bool.html&r=1&f=G&l=50&d=PALL&RefSrch=yes&Query=PN%2F5826102">5826102</a></td><td align="left">October 1998</td><td align="left">Escobar et al.</td></tr>
<tr><td align="left"><a href="http://patft.uspto.gov/netacgi/nph-Parser?Sect2=PTO1&Sect2=HITOFF&p=1&u=%2Fnetahtml%2FPTO%2Fsearch-bool.html&r=1&f=G&l=50&d=PALL&RefSrch=yes&Query=PN%2F5852435">5852435</a></td><td align="left">December 1998</td><td align="left">Vigneaux et al.</td></tr>
<tr><td align="left"><a href="http://patft.uspto.gov/netacgi/nph-Parser?Sect2=PTO1&Sect2=HITOFF&p=1&u=%2Fnetahtml%2FPTO%2Fsearch-bool.html&r=1&f=G&l=50&d=PALL&RefSrch=yes&Query=PN%2F5861906">5861906</a></td><td align="left">January 1999</td><td align="left">Dunn et al.</td></tr>
<tr><td align="left"><a href="http://patft.uspto.gov/netacgi/nph-Parser?Sect2=PTO1&Sect2=HITOFF&p=1&u=%2Fnetahtml%2FPTO%2Fsearch-bool.html&r=1&f=G&l=50&d=PALL&RefSrch=yes&Query=PN%2F5884056">5884056</a></td><td align="left">March 1999</td><td align="left">Steele</td></tr>
<tr><td align="left"><a href="http://patft.uspto.gov/netacgi/nph-Parser?Sect2=PTO1&Sect2=HITOFF&p=1&u=%2Fnetahtml%2FPTO%2Fsearch-bool.html&r=1&f=G&l=50&d=PALL&RefSrch=yes&Query=PN%2F5900905">5900905</a></td><td align="left">May 1999</td><td align="left">Shoff et al.</td></tr>
<tr><td align="left"><a href="http://patft.uspto.gov/netacgi/nph-Parser?Sect2=PTO1&Sect2=HITOFF&p=1&u=%2Fnetahtml%2FPTO%2Fsearch-bool.html&r=1&f=G&l=50&d=PALL&RefSrch=yes&Query=PN%2F6029045">6029045</a></td><td align="left">February 2000</td><td align="left">Picco et al.</td></tr>
<tr><td align="left"><a href="http://patft.uspto.gov/netacgi/nph-Parser?Sect2=PTO1&Sect2=HITOFF&p=1&u=%2Fnetahtml%2FPTO%2Fsearch-bool.html&r=1&f=G&l=50&d=PALL&RefSrch=yes&Query=PN%2F6064380">6064380</a></td><td align="left">May 2000</td><td align="left">Swenson et al.</td></tr>
<tr><td align="left"><a href="http://patft.uspto.gov/netacgi/nph-Parser?Sect2=PTO1&Sect2=HITOFF&p=1&u=%2Fnetahtml%2FPTO%2Fsearch-bool.html&r=1&f=G&l=50&d=PALL&RefSrch=yes&Query=PN%2F6084581">6084581</a></td><td align="left">July 2000</td><td align="left">Hunt</td></tr>
<tr><td align="left"><a href="http://patft.uspto.gov/netacgi/nph-Parser?Sect2=PTO1&Sect2=HITOFF&p=1&u=%2Fnetahtml%2FPTO%2Fsearch-bool.html&r=1&f=G&l=50&d=PALL&RefSrch=yes&Query=PN%2F6144375">6144375</a></td><td align="left">November 2000</td><td align="left">Jain et al.</td></tr>
<tr><td align="left"><a href="http://patft.uspto.gov/netacgi/nph-Parser?Sect2=PTO1&Sect2=HITOFF&p=1&u=%2Fnetahtml%2FPTO%2Fsearch-bool.html&r=1&f=G&l=50&d=PALL&RefSrch=yes&Query=PN%2F6160570">6160570</a></td><td align="left">December 2000</td><td align="left">Sitnik</td></tr>
<tr><td align="left"><a href="http://patft.uspto.gov/netacgi/nph-Parser?Sect2=PTO1&Sect2=HITOFF&p=1&u=%2Fnetahtml%2FPTO%2Fsearch-bool.html&r=1&f=G&l=50&d=PALL&RefSrch=yes&Query=PN%2F6236395">6236395</a></td><td align="left">May 2001</td><td align="left">Sezan et al.</td></tr>
<tr><td align="left"><a href="http://patft.uspto.gov/netacgi/nph-Parser?Sect2=PTO1&Sect2=HITOFF&p=1&u=%2Fnetahtml%2FPTO%2Fsearch-bool.html&r=1&f=G&l=50&d=PALL&RefSrch=yes&Query=PN%2F6243725">6243725</a></td><td align="left">June 2001</td><td align="left">Hempleman et al.</td></tr>
<tr><td align="left"><a href="http://patft.uspto.gov/netacgi/nph-Parser?Sect2=PTO1&Sect2=HITOFF&p=1&u=%2Fnetahtml%2FPTO%2Fsearch-bool.html&r=1&f=G&l=50&d=PALL&RefSrch=yes&Query=PN%2F6269369">6269369</a></td><td align="left">July 2001</td><td align="left">Robertson</td></tr>
<tr><td align="left"><a href="http://patft.uspto.gov/netacgi/nph-Parser?Sect2=PTO1&Sect2=HITOFF&p=1&u=%2Fnetahtml%2FPTO%2Fsearch-bool.html&r=1&f=G&l=50&d=PALL&RefSrch=yes&Query=PN%2F6289346">6289346</a></td><td align="left">September 2001</td><td align="left">Milewski et al.</td></tr>
<tr><td align="left"><a href="http://patft.uspto.gov/netacgi/nph-Parser?Sect2=PTO1&Sect2=HITOFF&p=1&u=%2Fnetahtml%2FPTO%2Fsearch-bool.html&r=1&f=G&l=50&d=PALL&RefSrch=yes&Query=PN%2F6366296">6366296</a></td><td align="left">April 2002</td><td align="left">Boreczky et al.</td></tr>
<tr><td align="left"><a href="http://patft.uspto.gov/netacgi/nph-Parser?Sect2=PTO1&Sect2=HITOFF&p=1&u=%2Fnetahtml%2FPTO%2Fsearch-bool.html&r=1&f=G&l=50&d=PALL&RefSrch=yes&Query=PN%2F6377861">6377861</a></td><td align="left">April 2002</td><td align="left">York</td></tr>
<tr><td align="left"><a href="http://patft.uspto.gov/netacgi/nph-Parser?Sect2=PTO1&Sect2=HITOFF&p=1&u=%2Fnetahtml%2FPTO%2Fsearch-bool.html&r=1&f=G&l=50&d=PALL&RefSrch=yes&Query=PN%2F6460036">6460036</a></td><td align="left">October 2002</td><td align="left">Herz</td></tr>
<tr><td align="left"><a href="http://patft.uspto.gov/netacgi/nph-Parser?Sect2=PTO1&Sect2=HITOFF&p=1&u=%2Fnetahtml%2FPTO%2Fsearch-bool.html&r=1&f=G&l=50&d=PALL&RefSrch=yes&Query=PN%2F6463444">6463444</a></td><td align="left">October 2002</td><td align="left">Jain et al.</td></tr>
<tr><td align="left"><a href="http://patft.uspto.gov/netacgi/nph-Parser?Sect2=PTO1&Sect2=HITOFF&p=1&u=%2Fnetahtml%2FPTO%2Fsearch-bool.html&r=1&f=G&l=50&d=PALL&RefSrch=yes&Query=PN%2F6483986">6483986</a></td><td align="left">November 2002</td><td align="left">Krapf</td></tr>
<tr><td align="left"><a href="http://patft.uspto.gov/netacgi/nph-Parser?Sect2=PTO1&Sect2=HITOFF&p=1&u=%2Fnetahtml%2FPTO%2Fsearch-bool.html&r=1&f=G&l=50&d=PALL&RefSrch=yes&Query=PN%2F6574378">6574378</a></td><td align="left">June 2003</td><td align="left">Lim</td></tr>
<tr><td align="left"><a href="http://patft.uspto.gov/netacgi/nph-Parser?Sect2=PTO1&Sect2=HITOFF&p=1&u=%2Fnetahtml%2FPTO%2Fsearch-bool.html&r=1&f=G&l=50&d=PALL&RefSrch=yes&Query=PN%2F6848002">6848002</a></td><td align="left">January 2005</td><td align="left">Detlef</td></tr>
<tr><td align="left"><a href="http://appft1.uspto.gov/netacgi/nph-Parser?TERM1=20020023230&Sect1=PTO1&Sect2=HITOFF&d=PG01&p=1&u=%2Fnetahtml%2FPTO%2Fsrchnum.html&r=0&f=S&l=50" target="_blank">2002/0023230</a></td><td align="left">February 2002</td><td align="left">Bolnick et al.</td></tr>
<tr><td align="left"><a href="http://appft1.uspto.gov/netacgi/nph-Parser?TERM1=20020170068&Sect1=PTO1&Sect2=HITOFF&d=PG01&p=1&u=%2Fnetahtml%2FPTO%2Fsrchnum.html&r=0&f=S&l=50" target="_blank">2002/0170068</a></td><td align="left">November 2002</td><td align="left">Rafey et al.</td></tr>
<tr><td align="left"><a href="http://appft1.uspto.gov/netacgi/nph-Parser?TERM1=20020194260&Sect1=PTO1&Sect2=HITOFF&d=PG01&p=1&u=%2Fnetahtml%2FPTO%2Fsrchnum.html&r=0&f=S&l=50" target="_blank">2002/0194260</a></td><td align="left">December 2002</td><td align="left">Headley et al.</td></tr>
<tr><td align="left"><a href="http://appft1.uspto.gov/netacgi/nph-Parser?TERM1=20030174861&Sect1=PTO1&Sect2=HITOFF&d=PG01&p=1&u=%2Fnetahtml%2FPTO%2Fsrchnum.html&r=0&f=S&l=50" target="_blank">2003/0174861</a></td><td align="left">September 2003</td><td align="left">Levy et al.</td></tr>
<tr><td align="left"><a href="http://appft1.uspto.gov/netacgi/nph-Parser?TERM1=20060212900&Sect1=PTO1&Sect2=HITOFF&d=PG01&p=1&u=%2Fnetahtml%2FPTO%2Fsrchnum.html&r=0&f=S&l=50" target="_blank">2006/0212900</a></td><td align="left">September 2006</td><td align="left">Ismail et al.</td></tr>
</tbody></table>
<br style="text-align: -webkit-auto;" /><center><b>Other References</b></center><table><tbody>
<tr><td align="left"><br />"Automatic Construction of Personalized TV News Programs," Association of Computing Machinery (ACM) Multimedia Conf., 323-331 (Presented Nov. 3, 1999). cited by examiner.<br />Electronic House Com, EchoStar Communications Corporation and Geocast Network Systems Align to Deliver New Personalized Interactive Broadband Services to PC Users Via Satellite, Jun. 4, 2002, http://209.6.10.99/news101600echostar.html, 3 pages. cited by other.<br />Lost Remote, The TV Revolution is Coming, Lost Remote TV New Media & Television Convergence News, TV News Gets (too?) Personal by Cory Bergman, Sep. 25, 2000, http://www.lostremote.com/producer/personal.html, 2 pages. cited by other.</td></tr>
</tbody></table>
<br style="text-align: -webkit-auto;" /><i style="text-align: -webkit-auto;">Primary Examiner:</i><span style="background-color: white; text-align: -webkit-auto;"> Bui; Kieu Oanh T </span><br style="text-align: -webkit-auto;" /><i style="text-align: -webkit-auto;">Assistant Examiner:</i><span style="background-color: white; text-align: -webkit-auto;"> Alcon; Fernando </span><br style="text-align: -webkit-auto;" /><i style="text-align: -webkit-auto;">Attorney, Agent or Firm:</i><span style="background-color: white; text-align: -webkit-auto;"> Blakely, Sokoloff, Taylor & Zafman LLP</span><br /><hr />
<center><b><i>Parent Case Text</i></b></center><hr />
<br /><br />RELATED APPLICATION<br /><br />This application is a continuation application of Ser. No. 09/651,433, filed Aug. 30, 2000 now abandoned.<hr />
<center><b><i>Claims</i></b></center><hr />
<br /><br />What is claimed is:<br /><br />1. A computerized method comprising: receiving, by an on-site media system, a plurality of teasers and a plurality of different media segments from a content provider, the received media segments to be presented in a current presentation order, wherein each of the plurality of teasers is an audio/video teaser; sequentially presenting a video component of each of the plurality of teasers on a local display, wherein the sequential presentation is a temporal sequential presentation; receiving a bookmark signal associated with a presented teaser during the sequential presentation of the video component of the presented teaser, wherein a media segment associated with the presented teaser is marked in response to receiving the bookmark signal; dynamically changing, in response to receiving the bookmark signal, a presentation position of the marked media segment in the current presentation order; and presenting the plurality of different media segments in the changed presentation order, wherein the marked media segment is presented before an unmarked and different media segment is presented and the plurality of media segments is presented subsequent to the presentation of the plurality of teasers.<br /><br />2. The method of claim 1, wherein the bookmark signal marks a media segment as of interest.<br /><br />3. The method of claim 1, wherein the bookmark signal marks a media segment as not of interest.<br /><br />4. The method of claim 3, wherein the changed presentation order comprises not presenting the marked media segment.<br /><br />5. The method of claim 1, wherein receiving the plurality of teasers comprises using a disk/tuner cartridge.<br /><br />6. The method of claim 1, wherein receiving the plurality of media segments comprises using a disk/tuner cartridge.<br /><br />7. The method of claim 1, wherein the teaser is associated with multiple media segments.<br /><br />8. 
The method of claim 1, wherein multiple teasers are associated with multiple media segments.<br /><br />9. A non-transitory machine readable medium having executable instructions to cause a processor to perform a method comprising: receiving, by an on-site media system, a plurality of teasers and a plurality of different media segments from a content provider, the received media segments to be presented in a current presentation order, wherein each of the plurality of teasers is an audio/video teaser; presenting a video component of each of the plurality of teasers on a local display, wherein the sequential presentation is a temporal sequential presentation; receiving a bookmark signal associated with a presented teaser during the sequential presentation of the video component of the presented teaser, wherein a media segment associated with the presented teaser is marked in response to receiving the bookmark signal; dynamically changing, in response to receiving the bookmark signal, a presentation position of the marked media segment in the current presentation order; and presenting the plurality of different media segments in the changed presentation order, wherein the marked media segment is presented before an unmarked and different media segment is presented and the plurality of media segments is presented subsequent to the presentation of the plurality of teasers.<br /><br />10. The non-transitory machine readable medium of claim 9, wherein the bookmark signal marks a media segment as of interest.<br /><br />11. The non-transitory machine readable medium of claim 9, wherein the bookmark signal marks a media segment as not of interest.<br /><br />12. The non-transitory machine readable medium of claim 11, wherein the changed presentation order comprises not presenting the marked media segment.<br /><br />13. The non-transitory machine readable medium of claim 9, wherein receiving the plurality of teasers comprises using a disk/tuner cartridge.<br /><br />14. 
The non-transitory machine readable medium of claim 9, wherein receiving the plurality of media segments comprises using a disk/tuner cartridge.<br /><br />15. The non-transitory machine readable medium of claim 9, wherein the teaser is associated with multiple media segments.<br /><br />16. The non-transitory machine readable medium of claim 9, wherein multiple teasers are associated with multiple media segments.<br /><br />17. A system comprising: a disk/tuner cartridge to receive, by an on-site media system, a plurality of teasers and a plurality of different media segments from a content provider, the received media segments to be presented in a current presentation order, wherein each of the plurality of teasers is an audio/video teaser; and a processor to sequentially present a video component of each of the plurality of teasers, wherein the sequential presentation is a temporal sequential presentation, wherein the disk-tuner cartridge receives a bookmark signal associated with a presented teaser during the sequential presentation of the video component of the presented teaser, marks a media segment associated with the presented teaser in response to receiving the bookmark signal and dynamically changes, in response to receiving the bookmark signal, a presentation position of the marked media segment in the current presentation order, the plurality of different media segments are presented by the processor in the changed presentation order, wherein the marked media segment is presented before an unmarked and different media segment is presented and the plurality of media segments is presented subsequent to the presentation of the plurality of teasers.<br /><br />18. The system of claim 17, wherein the bookmark signal marks a media segment as of interest.<br /><br />19. The system of claim 17, wherein the bookmark signal marks a media segment as not of interest.<br /><br />20. 
The system of claim 19, wherein the changed presentation order comprises not presenting the marked media segment.<br /><br />21. The system of claim 17, wherein the teaser is associated with multiple media segments.<br /><br />22. The system of claim 17, wherein multiple teasers are associated with multiple media segments.<br /><br />23. An apparatus comprising: means for receiving a plurality of teasers and a plurality of different media segments, the received media segments to be presented in a current presentation order, wherein each of the plurality of teasers is an audio/video teaser; means for sequentially presenting a video component of each of the plurality of teasers on a local display, wherein the sequential presentation is a temporal sequential presentation; means for receiving a bookmark signal associated with a presented teaser during the sequential presentation of the video component of the presented teaser, wherein a media segment associated with the presented teaser is marked in response to receiving the bookmark signal; means for dynamically changing, in response to receiving the bookmark signal, a presentation position of the marked media segment in the current presentation order; and means for presenting the plurality of different media segments in the changed presentation order, wherein the marked media segment is presented before an unmarked and different media segment is presented and the plurality of media segments is presented subsequent to the presentation of the plurality of teasers.<br /><br />24. The apparatus of claim 23, wherein the bookmark signal marks a media segment as of interest.<br /><br />25. The apparatus of claim 24, wherein the bookmark signal marks a media segment as not of interest.<br /><br />26. The apparatus of claim 25, wherein the changed presentation order comprises not presenting the marked media segment.<br /><br />27. 
The apparatus of claim 23, wherein the means for receiving the plurality of teasers comprises using a disk/tuner cartridge.<br /><br />28. The apparatus of claim 23, wherein the means for receiving the plurality of media segments comprises using a disk/tuner cartridge.<br /><br />29. The apparatus of claim 23, wherein the teaser is associated with multiple media segments.<br /><br />30. The apparatus of claim 23, wherein multiple teasers are associated with multiple media segments.<hr />
<center><b><i>Description</i></b></center><hr />
<br /><br />FIELD OF INVENTION<br /><br />The invention is related to audio/video storage and multimedia presentation systems.<br /><br />BACKGROUND OF THE INVENTION<br /><br />A multimedia presentation system enables a viewer to select one or more segments to watch by displaying a series of teasers, or short clips, that describe the segments.<br /><br />In some systems, the teasers are presented first, followed by the full stories. The user can interact with the presentation engine to influence the presentation sequence by either jumping to a specific story during the presentation of the respective teaser or by skipping a story to continue with the next story, or another continuation point.<br /><br />The problem with this system is that it only allows changing the "position-pointer" in an ongoing presentation. There is also no real indexing to the stories. The viewer is unable to set up a presentation sequence dynamically for passive viewing afterwards.<br /><br />SUMMARY OF THE INVENTION<br /><br />A method for real-time bookmarking of streaming media assets is disclosed. In one embodiment, the method includes dynamically changing a presentation sequence of a plurality of video segments based on one or more bookmark signals from a viewer.<br /><br />BRIEF DESCRIPTION OF THE DRAWINGS<br /><br />The present invention is illustrated by way of example and not limitation in the figures of the accompanying drawings, in which like references indicate similar elements, and in which:<br /><br />FIG. 1 shows an embodiment of a method for bookmarking.<br /><br />FIG. 2 is a block diagram of an on-site media system having a dedicated service module.<br /><br />FIG. 3A is a block diagram of data recorded on a dedicated service module.<br /><br />FIG. 3B is a diagram of multiple designs of a dedicated service module.<br /><br />FIG. 4 is a block diagram of another configuration of a dedicated service module.<br /><br />FIG. 
5 is a functional block diagram of an interactive media system including content provider and viewer systems with functions.<br /><br />FIG. 6A is a diagram of a fine-grain media stream.<br /><br />FIG. 6B is a television view generated using the interactive media system.<br /><br />DETAILED DESCRIPTION<br /><br />A method for real-time bookmarking of streaming media assets is disclosed. In one embodiment, the method includes dynamically changing the presentation order of a plurality of video segments based on one or more bookmark signals from a viewer.<br /><br />An advantage of this method is that the viewer receives a full overview of the available segment material. It is not necessary for the viewer to revisit the teasers to access other segment content of interest. The viewer can easily and dynamically determine the presentation sequence for subsequent passive and customized viewing.<br /><br />An apparatus, such as an interactive service module, can present television segments to a viewer on demand. The interactive service module can perform a method for real-time bookmarking of streaming media assets. The interactive service module may include a tuner to receive data for television segments, and a computer readable memory to store the segment data. Teasers associated with each segment may also be received by the tuner and stored in memory. Metadata may be used to identify each segment and its corresponding teaser. The metadata may also be received by the tuner and stored in memory. The metadata may be used to enable the viewer to control the presentation order of several segments that are displayed to the viewer. A presentation engine of the interactive service module may present the content based on viewer preferences.<br /><br />For example, digital Audio/Video (AV) content material, e.g. video clips representing a television news segment, may be available to the interactive service module from random access storage, either locally or through a network. 
For each story, represented by one or more video clips, an additional teaser video clip is available from storage. Alternatively, a table of contents (TOC) can be retrieved from storage. A teaser clip introduces a single story and gives an impression about the topic of the story. Descriptive metadata may be used by the interactive service module to identify separate stories in the video material and to identify their corresponding teasers.<br /><br />A dynamic navigation mechanism to perform real-time bookmarking may be executed by the interactive service module. The mechanism enables a viewer to send a signal to the presentation engine during the presentation of a teaser indicating that the corresponding story is of interest. The presentation of the teasers continues until all teasers have been presented, but the subsequent presentation structure of the corresponding stories is changed according to the viewer's bookmark signals. This results in a customized presentation of the bookmarked stories.<br /><br />A method for bookmarking is shown in FIG. 1. For a plurality of segments, each segment is associated with a corresponding teaser, step 110. Each teaser is displayed to the viewer in a sequential order, step 120. During the presentation of a given teaser, the viewer is enabled to send a bookmark signal indicating that the corresponding segment, or story, is of interest, step 130. If the viewer sends a bookmark signal, the corresponding segment is bookmarked as of interest to the viewer, step 140. The method determines whether all teasers have been presented to the viewer, step 150. If not, the next teaser in the sequential order is displayed and steps 120 through 150 are repeated. If all teasers have been presented, then the presentation order of the segments is dynamically changed based on the bookmark signals, step 160. For example, the programs that are bookmarked may be displayed before the programs that are not bookmarked. 
The segments are presented to the viewer in the dynamically changed presentation order, step 170.<br /><br />Alternatively, instead of sending a bookmark signal to indicate that the story is of interest, a viewer can send a signal to indicate that the story is not of interest. The "not of interest" signal can be used to place the corresponding story at a later position in the presentation sequence, or to remove the story entirely from the presentation sequence. A neutral signal may also be sent to indicate that the viewer is neither interested nor uninterested in the corresponding program.<br /><br />The method for bookmarking and dynamically changing the presentation order is not limited to bookmarking during the teaser presentation. In one embodiment, the method for bookmarking may also be used during a presentation of a story to indicate that the current story is of interest, but should be presented later or with reduced priority. Thus, this enables the viewer to postpone the presentation of the current story, and changes the presentation order dynamically.<br /><br />In one embodiment, a method to bookmark or postpone a story is not limited to a television news segment environment. The method can be applied to situations where a streaming media presentation order is dynamically changed based on viewer input, such as a table of contents of a video library, a music video, or an audio-only application, for example.<br /><br />FIGS. 2 through 5 show embodiments of an interactive service module for real-time bookmarking of streaming media assets. Referring now to FIG. 2, a block diagram of an on-site media system having a dedicated service module is shown, in accordance with one embodiment of the present invention. To provide a context for the dedicated service module, on-site media system 200 shows one embodiment of a larger system in which the dedicated service module may be implemented to provide a dedicated on-site media service. 
On-site media system 200 includes a control/data bus 202 for communicating information, a central processor unit 204 for processing information and instructions, coupled to bus 202, and a memory unit 206 for storing information and instructions, coupled to bus 202. Memory unit 206 can include random access memory (RAM) 206a, for storing temporary information and instructions for central processor unit 204, and read only memory (ROM) 206b, for storing static information and instructions for central processor unit 204. System 200 also includes a display device 218 coupled to bus 202, for viewing data, and a signal source 211, coupled to dedicated service module 210 via line 213a for providing a signal.<br /><br />On-site media system 200 also includes a dedicated service module 210, coupled to bus 202, to provide a media signal. Dedicated service module 210 can also be referred to as a dedicated media device or a dedicated service cartridge, depending on its specific configuration. Dedicated service module 210 enables the on-site media service to be implemented by providing dedicated tuning and guaranteed storage for a broadcast signal. The dedicated tuning provides a dedicated path from the broadcast stream into the guaranteed storage device. More specifically, dedicated service module 210 includes one or more dedicated tuners and one or more dedicated media storage devices, coupled to each other. More details of dedicated service module 210 are provided in subsequent figures. Dedicated service module 210 can allow for proprietary encoding of service information in a datacast associated with broadcast streams, with built-in support in the dedicated service module for processing the service information. 
The dedicated service module can also support software reconfiguration via broadcast at several different levels (e.g., device upgrade, software platform upgrade, and content upgrade).<br /><br />Signal source 211 can be any device, such as an antenna for receiving a broadcast, a cable interface for line transmission, or a dish for receiving satellite broadcast. Display device 218 of FIG. 2 can be any type of display, including an analog or a digital television, or a personal computer (PC) display. While processor 204 and memory 206 are shown as individual entities, they may be incorporated into another component. For example, processor 204 and memory 206 may be new components or may be existing components in display device 218, e.g. a digital television (DTV), dedicated service module 210, or in a set-top box (not shown). Additionally, while dedicated service module 210 is shown individually, it may be integrated into other components, such as display device 218, as shown in configuration B of subsequent FIG. 3B.<br /><br />System 200 also includes an optional Internet connection 216 coupled to bus 202 for transmitting information to, and receiving information from, the Internet. The information may be a video segment, such as an A/V clip for example. An optional user input device 212, e.g. a keypad, remote control, etc., coupled to bus 202 is also included in system 200 of FIG. 2, to provide communication between system 200 and a user. Optional local receiver/source 208, which can be a set top box in one embodiment, is coupled to bus 202 to provide a media signal. Optional local receiver/source 208 can alternatively be located inside display device 218. Optional local receiver/source 208 can allow viewer options such as simultaneous viewing of a segment through a tuner or source that is independent of the dedicated tuners of dedicated service module 210. Thus, the dedicated tuner, e.g. 
201, in dedicated service module 210, always provides a dedicated path for a given media signal.<br /><br />Bus 202 provides an exemplary coupling configuration of devices in on-site media system 200. Bus 202 is shown as a single bus line for clarity. It is appreciated by those skilled in the art that bus 202 can include subcomponents of specific data lines and/or control lines for the communication of commands and data between appropriate devices. It is further appreciated by those skilled in the art that bus 202 can be a parallel configuration or an IEEE 1394 configuration, and that bus 202 can include numerous gateways, interconnects, and translators, as appropriate for a given application.<br /><br />It is also appreciated that on-site media system 200 is exemplary only and that the present invention can operate within a number of different media systems including a commercial media system, a general purpose computer system, etc. Furthermore, the present invention is well-suited to using a host of intelligent devices that have similar components as exemplary on-site media system 200.<br /><br />Referring now to FIG. 3A, a block diagram of a dedicated service module is shown, in accordance with one embodiment of the present invention. Dedicated service module 210, also referred to as a dedicated media device, or a dedicated service cartridge depending upon the configuration, includes a media storage adapter 306, a tuner adapter 308, and interfaces 304a and 304b for tuner adapter 308 and for media storage adapter 306, respectively. Media storage adapter 306 includes appropriate mechanical and electrical components to accommodate a dedicated media storage device. Similarly, tuner adapter 308 includes appropriate mechanical and electrical components to accommodate a dedicated tuner. Media storage adapter 306 is coupled to tuner adapter 308 via one or more dedicated tuners, e.g. tuner 201a, and one or more dedicated disks, e.g. 
203a, respectively coupled together in exclusive pairs, in the present embodiment.<br /><br />Interface 304a, in turn, includes a multiplexed broadcast stream 213a coupled to tuner adapter 308. Interface 304b includes a two-way display device control line 316, which can be coupled to media storage adapter 306 via bus 315. In one embodiment, bus 315 can be coupled to bus 202 of FIG. 2. Interface 304b also includes an optional Internet connection 213b that may be directly coupled to one or more dedicated cartridges, e.g. open slot 312, in one embodiment. In another embodiment, only a dedicated storage device is coupled to optional Internet connection 213b because the Internet connection bypasses the need for a dedicated tuner.<br /><br />The present embodiment of dedicated service module 210 includes multiple tuners and disks, exclusively coupled to each other as shown. However, the present invention is well-suited to many different configurations. For example, one or more allocated partitions, or portions, of a single disk can be utilized in lieu of separate storage devices, e.g. one hard drive with five partitions replaces five separate hard drives. In yet another embodiment, a "gang" of multiple tuners could be cooperatively shared across a current active receiver, under the assumption that not all of the multiple broadcast signals would want to be tuned and recorded at all times. In this latter embodiment, each broadcast signal can still have a guaranteed capacity of disk storage. This latter embodiment would trade off the cost of a service module with the level of dedicated service desired.<br /><br />While the present embodiment arranges multiple tuner-storage pairs, e.g. 203a and 201a pair and 203b and 201b pair, in a parallel manner, the present invention is well-suited to alternative coupling arrangements. 
For example, in one embodiment, tuner-storage pairs may be daisy chained to deliver the multiplexed broadcast signal to each dedicated tuner.<br /><br />Bus 315, for providing the multiplexed broadcast stream, conforms to the Institute of Electrical and Electronics Engineers (IEEE) 1394 standard in one embodiment. Furthermore, two-way media/data line 316 is also compatible with the IEEE 1394 standard, in one embodiment.<br /><br />The connection to the optional local receiver, e.g. optional local receiver/source 208 of FIG. 2 (viz., a tuner in a television or Set Top Box (STB)), enables a viewer to access segments from dedicated service module 210 as a set of streams to complement a conventional broadcast from the optional local receiver. Furthermore, the present invention is well-suited to using many different configurations of dedicated tuner-storage devices. For example, one or more dedicated media storage devices may be committed to a single dedicated tuner, thus allowing concurrent recording and viewing. Alternative embodiments are provided in subsequent figures.<br /><br />The present invention also shows one open slot 312 for an additional dedicated tuner-storage pair. However, the present invention is well-suited to providing dedicated service module 210 with any number of open slots and any number of installed dedicated tuner-storage pairs.<br /><br />Additionally, dedicated service module 210 has a modular interface to media storage adapter 306 and tuner adapter 308 in the present embodiment. That is, the present embodiment of FIG. 3A is a form-factor media tower into which a consumer can plug or unplug dedicated service cartridge units having the dedicated tuners and media storage devices.<br /><br />Referring now to FIG. 3B, multiple designs of a dedicated service module are shown, in accordance with one embodiment of the present invention. Configurations A-C show alternative configurations for a modular embodiment of the dedicated service module, e.g. 
where the dedicated tuner-disk pairs are removable cartridges. Configuration A shows a traditional stand alone dedicated service module device. Configuration B shows an integrated dedicated service module that is built into a display device. Lastly, configuration C shows a stacked stand alone dedicated service module device. The dedicated tuner-storage pairs can be plugged into a back-plane of any device appropriate for consumer use. The present invention is well-suited to using any other stacking and coupling configuration for a modular dedicated service module. It is appreciated that the integrated service module devices shown in FIG. 3B are exemplary. The present invention is well-suited to a wide range of designs and configurations for the dedicated service module and the cartridge embodiment of the dedicated tuner-disk pair.<br /><br />Referring now to FIG. 4, a block diagram of another configuration of a dedicated service module is shown, in accordance with one embodiment of the present invention. Dedicated service module 310a, also referred to as a dedicated service cartridge, includes a media storage device 402 and a tuner 404. In the present embodiment, both the media storage device 402 and the tuner 404 to which it is coupled are dedicated to a specific content provider. For example, tuner 404 may be preset to receive a broadcast frequency corresponding to a national news broadcaster. In another embodiment, dedicated service module 310a can be a generic cartridge that is programmed with tuning instructions suitable to tune in the appropriate broadcast signal, in response to a subscription, or to some other business model.<br /><br />Tuner 404 of FIG. 4 is coupled to adapter 406 via data line 408 to receive a source signal, e.g. a broadcast spectrum. 
Media storage device 402 and tuner 404 are coupled via control line 410 to adapter 406 to receive instructions for the tuner and/or media storage device in accordance with on-site media service software and commands, e.g. via processor 204 and memory 206 of FIG. 2. Media storage device 402 is also coupled to adapter 406 via line 416 to provide media data from the media storage device to a media system, such as that shown in FIG. 2. Line 414 provides the dedicated media signal, tuned by tuner 404, to dedicated media storage 402. In another embodiment, data and control can be multiplexed on a single line. Adapter 406 allows dedicated service module 310a to interface with an interactive media system, such as the embodiment shown in FIG. 3A. As mentioned in FIG. 3A, another embodiment of a dedicated service module allows for dedicated Internet access, and thus eliminates the dedicated tuner but retains the dedicated media storage device.<br /><br />In one embodiment, dedicated service module 310a of FIG. 4 is a modular unit that a consumer can purchase and simply insert into an interactive media system. Media storage device 402 is shown as a single device in FIG. 4. However, the present invention is well-suited to using many different configurations and embodiments. In another embodiment, multiple independent read/write access mechanisms can be adapted to a single recording disk for simultaneous read/write operations. In the present embodiment, media storage device 402 is a hard drive unit, similar to those used in PCs. However, the present invention is well-suited to using any media recording device, as is appropriate for a given application. Additionally, the tuners and disks of the dedicated service module are capable of recording and delivering a fixed number of streams, e.g. for input and output, as appropriate for the service.<br /><br />While FIG. 
4 provides dedicated tuner-storage device 310a as a removable modular embodiment, it can also be configured as a fixed internal device for incorporation into a display device, such as a digital television. Additionally, tuner 404 can be implemented as a digital or an analog device. While FIG. 4 shows a single media storage device allocated to a single dedicated tuner, the present invention is well-suited to different configurations. For example, in lieu of dedicating an entire media storage device to a single dedicated tuner, one embodiment of the present invention dedicates one or more partitions of a common media storage device to a single dedicated tuner. In this manner, the single common storage device can be shared among multiple tuners while still satisfying the goal of guaranteed storage capacity for a broadcast signal.<br /><br />Referring now to FIG. 5, a functional block diagram of an interactive media system including a content provider media system and an on-site media system is shown, in accordance with one embodiment of the present invention. Interactive media system 500 includes a content provider media system 520, also referred to as content provider, and includes an on-site media system 530.<br /><br />Content provider media system 520 includes a media content database 504 that provides media content data, as indicated by the arrows, to an editing block 506 and to an encoder engine block 512. Any format of data can be stored in the media content database 504. For example, in one embodiment, the media content data stored in media content database 504 is compliant with the Moving Picture Experts Group-2 (MPEG-2) standard. Media content database 504 also communicates, as shown by the arrow, with on-site media service database 502, which in turn provides data to editing block 506. 
On-site media service database 502 includes metadata, content options, service data and service options, function data and functional options, and interactive data and interactive options, in one embodiment. However, the present invention is well-suited to storing any other type of data that would enhance the on-site media service. These data may be commands, software code, descriptive structures, or other information useful to an on-site media system. Additionally, the granularity of the on-site media service data can range from segment-based to clip-based, or even shorter time segments. Besides the data described, the present invention is well-suited to tying any other on-site media service data to the content data in order to provide an on-site media service that provides value to both content provider and viewer.<br /><br />Editing block 506 can be thought of as the segment director's editing service, which takes the raw production data and formats it into a television segment. The communication link between on-site media service database 502 and media content database 504 ties the on-site media service information to the core broadcast segment content, e.g. a core audiovisual news segment. Editing block 506 passes reference information, relating to the media content desired to be transmitted, to cutlist block 510. The service information corresponding to the desired segment content to be transmitted is sent in parallel from editing block 506 to the on-site media service data block 508. The output of blocks 508 and 510 is provided in parallel with the actual content data, referenced in cutlist block 510, from media content database 504, to encoder block 512, which subsequently provides a media signal to a user, e.g. on-site media system 530. 
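The idea of tying clip-granularity service data to the content referenced by a cutlist can be sketched with a small data model. This is only an illustration of the relationship the text describes; the class names, field names, and reference strings are invented for the example.

```python
# Hypothetical model of the editing flow around blocks 506-512: a segment is a
# cutlist of clips, and each clip carries its own on-site media service
# metadata in parallel with the content reference.
from dataclasses import dataclass, field


@dataclass
class Clip:
    content_ref: str   # reference into the media content database (illustrative)
    duration_s: float
    metadata: dict = field(default_factory=dict)  # service data tied to this clip


@dataclass
class Segment:
    title: str
    clips: list[Clip] = field(default_factory=list)

    @property
    def duration_s(self) -> float:
        # Segment duration is the sum of its clip time spans.
        return sum(c.duration_s for c in self.clips)


news = Segment("evening news", [
    Clip("content-db://clip/101", 12.0, {"topic": "weather"}),
    Clip("content-db://clip/102", 45.0, {"topic": "airplane story"}),
])
```

Keeping the service metadata per clip rather than per segment is what later enables the fine-grain navigation features discussed for FIG. 6A.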
While the present embodiment performs some editing of raw production media data, it still provides a sufficient amount of content data to a local media system to allow the viewer some options, if desired, in the selection of the content.<br /><br />In one embodiment, encoder block 512 is a transmitter that provides a terrestrial broadcast of media signal 522. However, the present invention is well-suited to any means of transmitting the media signal, such as cable or satellite. The present invention is also well-suited to a wide variety of methods for encoding data for transmission to an on-site media system.<br /><br />The present embodiment of the content provider interactive media system shown in FIG. 5 can be implemented with hardware that includes a processor coupled to a memory for storing instructions, commands, and method steps. The hardware implementation would also include a media storage device such as one or more hard drives coupled to the processor, a user input device, and a transmitter, all coupled to the processor.<br /><br />The other component of interactive media system 500 is on-site media system 530, which can be grouped in different sections for clarity. A first functional section 552 performs data reception in on-site media system 530. A second functional section 554 performs data recording, while a third functional section 556 performs data presentation. In data reception section 552, broadcast signal 522 is first received at a decoder functional block 532 which transmits, as shown by arrows, the decoded signal to content manager block 536. An optional information source, such as Internet data block 534, can provide additional data that can be integrated in the functional stages of on-site media system 530. Thus, for example, Internet data block 534 can automatically cache specific Web content prior to viewer presentation in order to give the viewer a sense of instant access during the presentation. 
Additionally, a back channel can be enabled either via this Internet block or through other mechanisms, such as a cable modem for cable-based broadcast.<br /><br />Decoder 532 can be a dedicated tuner, such as the dedicated tuner 404 shown in FIG. 4, or the dedicated tuner portion, e.g. tuner 201a of FIG. 3A. Content manager block 536 provides a filtering function on the decoded media signal. That is, content manager block 536 segregates content from on-site media service data and sends them to respective storage devices, e.g. media content hard drive 538 for content data, and on-site media service drive 540 for service data. These separate drives are figurative in one embodiment, as both signals can be tied together by writing them to a single disk. Content manager block 536 can also implement a first-level content filter that, according to subscription software, user profile, or viewer-selected options, decides whether to record the media signal, e.g. to media content hard drive 538, or to ignore the signal and not record it. The content manager can be implemented using instructions stored in memory 206 and executed on processor 204 of on-site media hardware system 200, as shown in FIG. 2, in one embodiment.<br /><br />The next stage of on-site media system 530 is the data presentation formatting stage 556. In this stage, on-site media service information is received from on-site media service drive 540 at showflow engine block 544. Showflow engine block 544 formats and implements on-site media service data for subsequent integration with content data. Then showflow engine block 544 provides the processed data to rendering engine 542. Similarly, content data is received from dedicated media content hard drive 538 at rendering engine 542. Rendering engine 542 performs the formatting and integration of the desired images to be viewed on the display device, in one embodiment. A wide variety of media elements, e.g. 
video, audio, text, etc., may be combined in many different formats to provide a desired composite presentation for viewing on display device 546. For example, electronic program guide (EPG) information may be more dynamically formatted, including clips from the actual segment. That is, the EPG can be enabled via the present invention to allow users to view previews of any segment for which a commercial has been broadcast, instead of the typical text title of a segment in a two-dimensional grid. In another embodiment, a user segment interface that presents menus, media clips, or other data may be overlaid onto content images for display device 546.<br /><br />Rendering engine 542 transfers presentation data to display device 546 for the final display stage 558. User input is communicated back to rendering engine 542 via line 548. User input can be received via push-button selection on a set-top box or television unit, or from another source, such as a remote control input.<br /><br />While the present embodiment only shows a single decoder 532 and a single dedicated hard drive set, e.g. disks 538 and 540, dedicated to a single media signal, e.g. signal 522, the present invention is capable of replicating these functional blocks for multiple units in parallel, in one embodiment. In another embodiment, memory and processor resources (e.g. memory 206 and processor 204 of FIG. 2) are utilized to accomplish engine functions (e.g. rendering engine 542, content manager function 536, and showflow engine 544, as well as other engines not shown). It is appreciated that the engine functions performed on memory and processor are accomplished in a serial manner if only a single processor is implemented. In another embodiment, multiple processors can be utilized to accomplish dedicated functions in on-site media system 530, in a parallel or serial fashion.<br /><br />Referring now to FIG. 
6A, a diagram of a fine-grain media stream 600 is shown, in accordance with one embodiment of the present invention. FIG. 6A illustrates segment data and duration as a physical block 601. Segment block 601 has a time span 606 over which content is presented. The present invention provides very fine-grain metadata tagging for segment content. For example, FIG. 6A shows metadata labeling at a clip level, e.g. metadata tag 603a for clip content 602a having a time span 604. This is repeated for any quantity of clips within the segment. The present invention is well-suited to using any scale of metadata labeling, as appropriate for an application. For example, tagging clips with metadata would be appropriate for some news segments having many short clips in the segment. By using the fine-grain metadata tagging, the present invention provides the necessary data and infrastructure for an on-site media service to provide enhanced services and functions to a viewer. One such feature would be fine-grain navigation and compilation of media content related to a specific viewer interest or inquiry.<br /><br />Referring now to FIG. 6B, a television view generated using the interactive media system is shown, in accordance with one embodiment of the present invention. Television view 650 is shown on a conventional television 658. Segment user interface 654 is provided along with a presenter 656 image, both of which are overlaid onto a core media content 652, e.g., an airplane story clip. The present invention provides the appropriate audio and associated data corresponding to the video data. Notably, the content provider can exercise editorial control over when and what service, function, and content options are available to the viewer, e.g. through the segment user interface. This allows greater choice to a viewer while still satisfying a business model for the content provider.<br /><br />Television view 650 illustrates how the content provider, e.g. 
broadcaster, can control some of the recording, management, formatting, and presentation of media to a user. Similarly, television view 650 illustrates how the viewer can interact with predetermined menu options to accomplish desired services and features, e.g. viewing the segment user interface for alternative clips, selecting a function from a menu in segment user interface 654, or adjusting the presenter 656 format. The present invention is well-suited to using any combination of these, and other, presentation formats and contents to present an on-site media service to the viewer and/or user. Furthermore, each of the several on-site media services described can be implemented independently of each other, or in any combination. The same independence exists for the interactive feature of the on-site media service.<br /><br />The method can be implemented in an environment with software-controlled access to streamed media, where descriptive metadata is used to relate teaser AV material to full-length versions of the corresponding content.<br /><br />These and other embodiments of the present invention may be realized in accordance with these teachings, and it should be evident that various modifications and changes may be made in these teachings without departing from the broader spirit and scope of the invention. 
The specification and drawings are, accordingly, to be regarded in an illustrative rather than restrictive sense, and the invention is to be measured only in terms of the claims.</div>
<div dir="ltr" style="text-align: left;" trbidi="on">
<br />
<hr style="text-align: -webkit-auto;" />
<table><tbody>
<tr><td align="LEFT" width="50%"><b>United States Patent</b></td><td align="RIGHT" width="50%"><b>8,191,102</b></td></tr>
<tr><td align="LEFT" width="50%"><b>Newton , et al.</b></td><td align="RIGHT" width="50%"><b>May 29, 2012</b></td></tr>
</tbody></table>
<hr style="text-align: -webkit-auto;" />
<span style="text-align: -webkit-auto;">Method of transmitting interactive television </span><br style="text-align: -webkit-auto;" /><br style="text-align: -webkit-auto;" /><center><b>Abstract</b></center><div style="text-align: -webkit-auto;">
A method (1) of transmitting interactive television, whereby interactive television applications are transmitted inside application-modules. These modules are transmitted in a broadcast stream. Recording systems cannot decide which modules are to be recorded. Therefore, storage related information for said modules is signalled in the broadcast stream. Module identification information is implemented in the Application Information Table (AIT) and/or in the Download Information Indication (DII) message. Thus, information is included in the broadcast stream concerning categories stating whether application modules are mandatory, optional or forbidden to record. Alternatively, properties of a module are chosen from Code/Data/Both and/or Fixed/Variable. Recording systems use this information to decide whether application modules are to be recorded or disregarded. Alternatively, application module identification information is transmitted in said broadcast stream. A module identification number is used to avoid multiple recordings. Application modules having the same category are preferably grouped together.</div>
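The recording decision the abstract describes has a simple shape: each signalled module falls into one of three categories, and the receiver records accordingly. The following sketch illustrates that logic only; the enum and function names are invented for the example, not taken from the patent or any MHP implementation.

```python
# Sketch of the receiver-side decision: modules signalled in the broadcast
# stream are categorized as mandatory, optional, or forbidden to record.
from enum import Enum


class StorageCategory(Enum):
    MANDATORY = "mandatory"   # critical files; always recorded
    OPTIONAL = "optional"     # extra features / config; recorded at discretion
    FORBIDDEN = "forbidden"   # never recorded


def should_record(category: StorageCategory, record_optional: bool = False) -> bool:
    """Decide whether an application module is recorded or disregarded."""
    if category is StorageCategory.MANDATORY:
        return True
    if category is StorageCategory.OPTIONAL:
        return record_optional
    return False  # forbidden modules are never recorded
```

In the patent's scheme the category itself would be carried as signalling in the AIT and/or DII message; here it is simply passed in as a value.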
<hr style="text-align: -webkit-auto;" />
<table><tbody>
<tr><td align="LEFT" valign="TOP" width="10%">Inventors:</td><td align="LEFT" width="90%"><b>Newton; Philip Steven</b> (Eindhoven, <b>NL</b>)<b>, Kelly; Declan Patrick</b> (Eindhoven, <b>NL</b>)<b>, Tan; Jingwei</b> (Shanghai, <b>CN</b>)<b>, Shi; Jun</b> (Shanghai, <b>CN</b>)<b>, Gan; Liang</b> (Shanghai, <b>CN</b>)</td></tr>
<tr><td align="LEFT" valign="TOP" width="10%">Assignee:</td><td align="LEFT" width="90%"><b>Koninklijke Philips Electronics, N.V.</b> (Eindhoven, <b>NL</b>) </td></tr>
<tr><td align="LEFT" nowrap="" valign="TOP" width="10%">Appl. No.:</td><td align="LEFT" width="90%"><b>10/541,051</b></td></tr>
<tr><td align="LEFT" valign="TOP" width="10%">Filed:</td><td align="LEFT" width="90%"><b>December 5, 2003</b></td></tr>
<tr><td align="LEFT" valign="TOP" width="10%">PCT Filed:</td><td align="LEFT" width="90%"><b>December 05, 2003</b></td></tr>
<tr><td align="LEFT" valign="TOP" width="10%">PCT No.:</td><td align="LEFT" width="90%"><b>PCT/IB03/05789</b></td></tr>
<tr><td align="LEFT" valign="TOP" width="15%">371(c)(1),(2),(4) Date:</td><td align="LEFT" width="85%"><b>June 29, 2005</b></td></tr>
<tr><td align="LEFT" nowrap="" valign="TOP" width="10%">PCT Pub. No.:</td><td align="LEFT" width="90%"><b>WO2004/059973</b></td></tr>
<tr><td align="LEFT" nowrap="" valign="TOP" width="10%">PCT Pub. Date:</td><td align="LEFT" width="90%"><b>July 15, 2004</b></td></tr>
</tbody></table>
<hr style="text-align: -webkit-auto;" />
<center><b>Foreign Application Priority Data</b></center><hr align="center" style="text-align: -webkit-auto;" width="30%" />
<table><tbody>
<tr><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td align="center">Dec 30, 2002 [EP]</td><td></td><td></td><td align="left">020805974</td></tr>
<tr><td align="center"></td></tr>
</tbody></table>
<div style="text-align: -webkit-auto;">
</div>
<table><tbody>
<tr><td align="LEFT" valign="TOP" width="40%"><b>Current U.S. Class:</b></td><td align="RIGHT" valign="TOP" width="80%"><b>725/136</b> ; 725/142; 725/50</td></tr>
<tr><td align="LEFT" valign="TOP" width="40%"><b>Current International Class:</b></td><td align="RIGHT" valign="TOP" width="80%">H04N 7/16 (20110101)</td></tr>
<tr><td align="LEFT" valign="TOP" width="40%"><b>Field of Search:</b></td><td align="RIGHT" valign="TOP" width="80%">725/114-117,135,136,142,140,145-147,50</td></tr>
</tbody></table>
<br />
<hr style="text-align: -webkit-auto;" />
<center><b>References Cited <a href="http://patft.uspto.gov/netacgi/nph-Parser?Sect1=PTO2&Sect2=HITOFF&p=1&u=%2Fnetahtml%2Fsearch-adv.htm&r=0&f=S&l=50&d=PALL&Query=ref/8191102">[Referenced By]</a></b></center><hr style="text-align: -webkit-auto;" />
<center><b>U.S. Patent Documents</b></center><table><tbody>
<tr><td width="33%"></td><td width="33%"></td><td width="34%"></td></tr>
<tr><td align="left"><a href="http://patft.uspto.gov/netacgi/nph-Parser?Sect2=PTO1&Sect2=HITOFF&p=1&u=%2Fnetahtml%2FPTO%2Fsearch-bool.html&r=1&f=G&l=50&d=PALL&RefSrch=yes&Query=PN%2F5625693">5625693</a></td><td align="left">April 1997</td><td align="left">Rohatgi et al.</td></tr>
<tr><td align="left"><a href="http://patft.uspto.gov/netacgi/nph-Parser?Sect2=PTO1&Sect2=HITOFF&p=1&u=%2Fnetahtml%2FPTO%2Fsearch-bool.html&r=1&f=G&l=50&d=PALL&RefSrch=yes&Query=PN%2F5768539">5768539</a></td><td align="left">June 1998</td><td align="left">Metz et al.</td></tr>
<tr><td align="left"><a href="http://patft.uspto.gov/netacgi/nph-Parser?Sect2=PTO1&Sect2=HITOFF&p=1&u=%2Fnetahtml%2FPTO%2Fsearch-bool.html&r=1&f=G&l=50&d=PALL&RefSrch=yes&Query=PN%2F6427238">6427238</a></td><td align="left">July 2002</td><td align="left">Goodman et al.</td></tr>
<tr><td align="left"><a href="http://patft.uspto.gov/netacgi/nph-Parser?Sect2=PTO1&Sect2=HITOFF&p=1&u=%2Fnetahtml%2FPTO%2Fsearch-bool.html&r=1&f=G&l=50&d=PALL&RefSrch=yes&Query=PN%2F6536041">6536041</a></td><td align="left">March 2003</td><td align="left">Knudson et al.</td></tr>
<tr><td align="left"><a href="http://appft1.uspto.gov/netacgi/nph-Parser?TERM1=20040128699&Sect1=PTO1&Sect2=HITOFF&d=PG01&p=1&u=%2Fnetahtml%2FPTO%2Fsrchnum.html&r=0&f=S&l=50" target="_blank">2004/0128699</a></td><td align="left">July 2004</td><td align="left">Delpuch et al.</td></tr>
<tr><td align="left"></td></tr>
</tbody></table>
<center><b>Foreign Patent Documents</b></center><table><tbody>
<tr><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td align="left"></td><td align="left">WO 01/33852</td><td></td><td align="left">May., 2001</td><td></td><td align="left">WO</td></tr>
<tr><td align="left"></td><td align="left">WO0201866</td><td></td><td align="left">Jan., 2002</td><td></td><td align="left">WO</td></tr>
<tr><td align="left"></td></tr>
</tbody></table>
<br style="text-align: -webkit-auto;" /><center><b>Other References</b></center><table><tbody>
<tr><td align="left"><br />P. Perrot; DVB-HTML: An Optional Declarative Language Within MHP 1.1; Sep. 2001; pp. 1-16. cited by other.</td></tr>
</tbody></table>
<br style="text-align: -webkit-auto;" /><i style="text-align: -webkit-auto;">Primary Examiner:</i><span style="background-color: white; text-align: -webkit-auto;"> Kumar; Pankaj </span><br style="text-align: -webkit-auto;" /><i style="text-align: -webkit-auto;">Assistant Examiner:</i><span style="background-color: white; text-align: -webkit-auto;"> Newlin; Timothy </span><br style="text-align: -webkit-auto;" /><hr style="text-align: -webkit-auto;" />
<center><b><i>Claims</i></b></center><hr style="text-align: -webkit-auto;" />
<br style="text-align: -webkit-auto;" /><br style="text-align: -webkit-auto;" /><span style="background-color: white; text-align: -webkit-auto;">The invention claimed is:</span><br style="text-align: -webkit-auto;" /><br style="text-align: -webkit-auto;" /><span style="background-color: white; text-align: -webkit-auto;">1. A method of transmitting interactive television, whereby at least an interactive television application is transmitted inside application-modules in a broadcast stream that includes television content, wherein said method facilitates recording of said broadcast stream at a receiver, said method comprising the step of: including storage related information for each of said application-modules of the interactive application in said broadcast stream; and transmitting said broadcast stream including said application-modules and said storage related information, wherein said storage related information categorizes each said application-module alternatively as (i) mandatory for recording, (ii) optional for recording or (iii) forbidden for recording at the receiver, wherein the mandatory application-modules contain files that are critical for running the corresponding interactive application from storage, and wherein the optional application-modules comprise non-mandatory application-modules for use when running the corresponding interactive application from storage which contain (a) files that offer the corresponding interactive application extra features and (b) configuration files of the corresponding interactive application that must always be downloaded from a live broadcast stream in order to have the corresponding interactive application up-to-date when the corresponding interactive application is run. </span><br style="text-align: -webkit-auto;" /><br style="text-align: -webkit-auto;" /><span style="background-color: white; text-align: -webkit-auto;">2. 
The method as claimed in claim 1, wherein said interactive television application is transmitted as at least one application object inside DSMCC-modules in said broadcast stream. </span><br style="text-align: -webkit-auto;" /><br style="text-align: -webkit-auto;" /><span style="background-color: white; text-align: -webkit-auto;">3. The method as claimed in claim 2, wherein said at least one application object comprises at least one application file object and at least one application directory object, said application file object comprising at least one application file and said at least one application directory object comprising storage directory information on respective application file. </span><br style="text-align: -webkit-auto;" /><br style="text-align: -webkit-auto;" /><span style="background-color: white; text-align: -webkit-auto;">4. The method as claimed in claim 1, wherein said storage related information further comprises: module identification information. </span><br style="text-align: -webkit-auto;" /><br style="text-align: -webkit-auto;" /><span style="background-color: white; text-align: -webkit-auto;">5. The method as claimed in claim 4, wherein the step of including storage related information comprises: including said storage related information in an Application Information Table (AIT) and/or in a Download Information Indication message. </span><br style="text-align: -webkit-auto;" /><br style="text-align: -webkit-auto;" /><span style="background-color: white; text-align: -webkit-auto;">6. 
The method as claimed in claim 4, wherein said module identification information is defined and included in an Application Information Table (AIT) and consists of an application identifier having two fields, the first field being an organisation_id and the second field being an application_id, wherein said organization_id and said application_id values are used to identify identical applications in different broadcasts so that, with respect to recording of said broadcast stream at the receiver, any given application is stored only once on a specific storage medium. </span><br style="text-align: -webkit-auto;" /><br style="text-align: -webkit-auto;" /><span style="background-color: white; text-align: -webkit-auto;">7. The method as claimed in claim 1, wherein said storage related information further comprises properties of an application-module chosen from a) Code and/or Data and/or b) Fixed or Variable, wherein each application-module property is flagged via a corresponding flag as one selected from the group consisting of a): a.sub.1) code, a.sub.2) data, and a.sub.3) both code and data, and b): b.sub.1) fixed and b.sub.2) variable. </span><br style="text-align: -webkit-auto;" /><br style="text-align: -webkit-auto;" /><span style="background-color: white; text-align: -webkit-auto;">8. The method as claimed in claim 7, wherein a Digital Storage Media Command and Control generator generates groups of application-modules with similar storage related information via use of the application-module property flags in an object carousel for broadcasting, wherein fixed files are grouped together, and wherein code files are grouped together, data files are grouped together, and the grouped together code files and the grouped together data files are stored separately in respective separate modules. </span><br style="text-align: -webkit-auto;" /><br style="text-align: -webkit-auto;" /><span style="background-color: white; text-align: -webkit-auto;">9. 
A method of receiving an interactive television broadcast stream for recording, whereby at least an interactive television application is comprised in the broadcast stream inside application-modules, said method comprising the steps of: extracting storage related information for each of said application-modules of the interactive application from said broadcast stream; and recording application-modules which are mandatory for recording, based on said storage related information, wherein said storage related information categorizes each said application-module alternatively as (i) mandatory for recording, (ii) optional for recording or (iii) forbidden for recording at a receiver, wherein the mandatory application-modules contain files that are critical for running the corresponding interactive application from storage, and wherein the optional application-modules comprise non-mandatory application-modules for use when running the corresponding interactive application from storage which contain (a) files that offer the corresponding interactive application extra features and (b) configuration files of the corresponding interactive application that must always be downloaded from a live broadcast stream in order to have the corresponding interactive application up-to-date when the corresponding interactive application is run. </span><br style="text-align: -webkit-auto;" /><br style="text-align: -webkit-auto;" /><span style="background-color: white; text-align: -webkit-auto;">10. The method as claimed in claim 9, wherein said method further comprises the step of: recording application-modules which are optional for recording, based on said storage related information. </span><br style="text-align: -webkit-auto;" /><br style="text-align: -webkit-auto;" /><span style="background-color: white; text-align: -webkit-auto;">11. 
The method as claimed in claim 9, wherein said method further comprises the steps of: identifying identical application-modules in different broadcasts, and storing only one copy of identical application-modules on a specific storage medium. </span><br style="text-align: -webkit-auto;" /><br style="text-align: -webkit-auto;" /><span style="background-color: white; text-align: -webkit-auto;">12. The method as claimed in claim 9, whereby said interactive television is MHP, OpenTV or DASE. </span><br style="text-align: -webkit-auto;" /><br style="text-align: -webkit-auto;" /><span style="background-color: white; text-align: -webkit-auto;">13. An apparatus for recording and/or playing back interactive television, said apparatus being adapted to record and/or play back an interactive television broadcast stream to and from a storage medium, said apparatus being adapted to receive said interactive television broadcast stream, said broadcast stream including television content, an interactive television application contained in modules, and storage related information for each of said modules, said apparatus comprising: means for extracting said storage related information of said modules of the interactive application from said broadcast stream; and means for recording said modules in dependence on said storage related information, wherein said storage related information categorizes each said module alternatively as (i) mandatory for recording, (ii) optional for recording or (iii) forbidden for recording at a receiver, wherein the mandatory modules contain files that are critical for running the corresponding interactive application from storage, and wherein the optional modules comprise non-mandatory application-modules for use when running the corresponding interactive application from storage which contain (a) files that offer the corresponding interactive application extra features and (b) configuration files of the corresponding interactive application that must 
always be downloaded from a live broadcast stream in order to have the corresponding interactive application up-to-date when the corresponding interactive application is run, and said means for recording being adapted to record only modules for which said storage related information allows recording. </span><br style="text-align: -webkit-auto;" /><br style="text-align: -webkit-auto;" /><span style="background-color: white; text-align: -webkit-auto;">14. The apparatus as claimed in claim 13, wherein said storage related information comprises module identification information for modules, and wherein said apparatus further comprises: means for preventing recording of more than one application module in different broadcasts with identical module identification information on a storage medium in said apparatus. </span><br style="text-align: -webkit-auto;" /><br style="text-align: -webkit-auto;" /><span style="background-color: white; text-align: -webkit-auto;">15. A non-transitory computer-readable medium having embodied thereon a computer program for processing by a computer, said computer program causing said computer to prepare and transmit an interactive television broadcast stream facilitating recording by a receiver, the computer program comprising: a code segment for causing the computer to include application modules and storage related information for each of the application modules in an interactive television broadcast stream, at least an interactive television application being included inside said application modules, and a code segment for causing the computer to transmit the interactive television broadcast stream, wherein said storage related information categorizes each said application module alternatively as (i) mandatory for recording, (ii) optional for recording or (iii) forbidden for recording, wherein the mandatory application modules contain files that are critical for running the corresponding interactive application from storage, and wherein 
the optional application modules comprise non-mandatory application modules for use when running the corresponding interactive application from storage which contain (a) files that offer the corresponding interactive application extra features and (b) configuration files of the corresponding interactive application that must always be downloaded from a live broadcast stream in order to have the corresponding interactive application up-to-date when the corresponding interactive application is run.</span><hr style="text-align: -webkit-auto;" />
<center><b><i>Description</i></b></center><hr style="text-align: -webkit-auto;" />
<br style="text-align: -webkit-auto;" /><br style="text-align: -webkit-auto;" /><span style="background-color: white; text-align: -webkit-auto;">This invention relates in general to the field of interactive television and more particularly to the recording of interactive television contents and even more particularly to handling of applications in the field of recording of interactive television contents. </span><br style="text-align: -webkit-auto;" /><br style="text-align: -webkit-auto;" /><span style="background-color: white; text-align: -webkit-auto;">Interactive television (iTV) is becoming more and more popular. An example of interactive television is the Multimedia Home Platform (MHP), which is a digital video broadcasting (DVB) standard intended to combine digital television (DTV) with interactivity and access to the Internet and the World Wide Web. DTV service providers offer a large variety of audio-visual (A/V) television programs and also of applications allowing the interaction of the viewer/user with the TV set and its contents. </span><br style="text-align: -webkit-auto;" /><br style="text-align: -webkit-auto;" /><span style="background-color: white; text-align: -webkit-auto;">Similar to today's video recorders for analogue television broadcasts using video tapes for recording broadcast streams, digital video recorders for interactive television are developed using either a harddisk or removable media such as optical discs for storing recorded broadcasts. The digital video recorders for interactive television record both A/V television contents and applications for playback at a later point in time. </span><br style="text-align: -webkit-auto;" /><br style="text-align: -webkit-auto;" /><span style="background-color: white; text-align: -webkit-auto;">MHP applications are transmitted inside modules through a Digital Storage Media Command and Control (DSMCC) object carousel. 
The DSMCC object carousel defines how and when to send modules/files down a broadcast channel. There is no connection to the server for a receiving device to ask for wanted files. All files are repeatedly sent all the time, e.g. once per 10 seconds. MHP terminals look for the files they need as they come round. The modules contain the files that the MHP application needs to run. Some files are part of the application itself, whilst other files can be left out or only have relevance in a particular instance, for example configuration files. For example, a broadcaster develops a segmented latest news application and transmits this together with the latest news. The broadcaster develops the application only once, as it is configured for the news of a particular day by use of, e.g., updated configuration or metadata files. </span><br style="text-align: -webkit-auto;" /><br style="text-align: -webkit-auto;" /><span style="background-color: white; text-align: -webkit-auto;">Thus, when recording MHP applications, some modules might not need to be recorded. It is a problem that the MHP recording system cannot determine which modules are to be recorded and which are not to be recorded. Furthermore, not all the modules may contain files that are necessary to record, e.g. some modules may contain files, such as configuration files, that must always be loaded from the live broadcast stream in order to have the application up-to-date when the application is run. On the other hand, some files have to be recorded, in order to be able to run the recorded application at a later point of time, as the application program file will not be available at that point of time. </span><br style="text-align: -webkit-auto;" /><br style="text-align: -webkit-auto;" /><span style="background-color: white; text-align: -webkit-auto;">Furthermore, it is a problem that storage space is limited on every storage medium.
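The carousel behaviour just described can be pictured with a minimal sketch: there is no return channel, so the receiver simply watches the repeating stream and collects each wanted module the first time it comes round. The helper name and module names below are hypothetical illustrations, not part of any DSMCC API.

```python
# Minimal sketch of a DSMCC-style object carousel receiver (hypothetical
# helper, not a real MHP API). The carousel repeats every module cyclically
# and there is no return channel, so the receiver simply watches the stream
# and grabs each wanted module the first time it comes round.

def receive_wanted_modules(carousel, wanted):
    """carousel: iterable of (module_id, payload) pairs, cycling repeatedly.
    wanted: set of module ids the application needs."""
    collected = {}
    for module_id, payload in carousel:
        if module_id in wanted and module_id not in collected:
            collected[module_id] = payload
        if wanted <= collected.keys():
            break  # everything needed has come round at least once
    return collected

# A carousel that repeats three modules, e.g. once per 10 seconds:
cycle = [("app.jar", b"code"), ("news.cfg", b"cfg"), ("logo.png", b"img")] * 3
print(receive_wanted_modules(iter(cycle), {"app.jar", "logo.png"}))
```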
Therefore it is desirable to keep the amount of space used for recording applications on a storage medium as low as possible, in order to be able to record as much iTV content as possible on the storage medium. </span><br style="text-align: -webkit-auto;" /><br style="text-align: -webkit-auto;" /><span style="background-color: white; text-align: -webkit-auto;">The present invention overcomes the above-identified deficiencies in the art and solves the above problems by providing a method, an apparatus and a signal according to the appended independent claims. </span><br style="text-align: -webkit-auto;" /><br style="text-align: -webkit-auto;" /><span style="background-color: white; text-align: -webkit-auto;">The general solution according to the invention is to signal recording/storage related properties of the modules, i.e. to signal e.g. which modules are mandatory and which modules are optional to record, and/or to signal other properties, which allow optimisation of recording. </span><br style="text-align: -webkit-auto;" /><br style="text-align: -webkit-auto;" /><span style="background-color: white; text-align: -webkit-auto;">More particularly, in order to enable the recording system to determine which modules are to be recorded, it is, according to a preferred embodiment of the invention, indicated in the iTV broadcast which modules are mandatory to record and/or which modules are optional to record and/or which modules are forbidden to record. According to an embodiment of the invention, the broadcaster signals, e.g. in the Application Information Table (AIT) and/or in the Download Information Indication (DII) message, which modules are optional and which modules are compulsory or forbidden to record. Compulsory modules contain files that are critical for running the application from storage. Optional modules contain files that offer the application extra features or contain configuration files that must always be loaded from the live broadcast.
</span><br style="text-align: -webkit-auto;" /><br style="text-align: -webkit-auto;" /><span style="background-color: white; text-align: -webkit-auto;">According to one aspect of the invention, a method is provided, which is a method of transmitting interactive television whereby at least an interactive television application is transmitted inside DSMCC-modules in a broadcast stream. The method comprises the step of signalling storage related information of the modules in said broadcast stream. </span><br style="text-align: -webkit-auto;" /><br style="text-align: -webkit-auto;" /><span style="background-color: white; text-align: -webkit-auto;">According to another aspect of the invention, another method is provided, which is a method for receiving an interactive television broadcast stream for recording, whereby at least an interactive television application carried in modules is transmitted in the broadcast stream. The method comprises the steps of extracting storage related information of said modules in said broadcast stream and recording modules which are mandatory to record. The recording is based on the storage related information, i.e. the storage related information is used as control information on whether to record the application or not. </span><br style="text-align: -webkit-auto;" /><br style="text-align: -webkit-auto;" /><span style="background-color: white; text-align: -webkit-auto;">According to yet another aspect of the invention, an apparatus for recording and/or playing back interactive television is provided. The apparatus is adapted to record interactive television from a broadcast transport stream (TS) to a storage medium. Optionally, the apparatus is also adapted to play back interactive television from a storage medium.
The apparatus comprises means for extracting storage related information of said modules from said broadcast stream, and means for recording these modules, whereby only modules are recorded for which the storage related information allows recording. </span><br style="text-align: -webkit-auto;" /><br style="text-align: -webkit-auto;" /><span style="background-color: white; text-align: -webkit-auto;">According to a further aspect of the invention, a computer-readable medium having embodied thereon a computer program for processing by a computer is provided. The computer program comprises a code segment for signalling storage related information of modules in an interactive television broadcast stream, whereby at least an interactive television application is transmitted inside application modules, preferably DSMCC-modules, in a broadcast stream. </span><br style="text-align: -webkit-auto;" /><br style="text-align: -webkit-auto;" /><span style="background-color: white; text-align: -webkit-auto;">According to yet a further aspect of the invention, a signal for transmitting interactive television is provided. The signal comprises a broadcast transport stream of interactive television contents. The contents comprise at least an interactive television application, whereby the latter comprises modules being transmitted by said signal. The signal comprises modules, and storage related information and/or module identification information in the broadcast stream. </span><br style="text-align: -webkit-auto;" /><br style="text-align: -webkit-auto;" /><span style="background-color: white; text-align: -webkit-auto;">According to another aspect of the invention, a graphical user interface for an interactive television DSMCC generator is provided for specification of storage related information of modules to be transmitted inside DSMCC-modules in a broadcast stream.
</span><br style="text-align: -webkit-auto;" /><br style="text-align: -webkit-auto;" /><span style="background-color: white; text-align: -webkit-auto;">Preferably, the application-modules are transported inside DSMCC-modules in the broadcast stream. </span><br style="text-align: -webkit-auto;" /><br style="text-align: -webkit-auto;" /><span style="background-color: white; text-align: -webkit-auto;">Preferred embodiments of the present invention will be described in the following detailed disclosure, reference being made to the accompanying drawings, in which </span><br style="text-align: -webkit-auto;" /><br style="text-align: -webkit-auto;" /><span style="background-color: white; text-align: -webkit-auto;">FIG. 1 shows a flow chart of an embodiment of the invention, </span><br style="text-align: -webkit-auto;" /><br style="text-align: -webkit-auto;" /><span style="background-color: white; text-align: -webkit-auto;">FIG. 2 shows a flow chart of another embodiment of the invention, </span><br style="text-align: -webkit-auto;" /><br style="text-align: -webkit-auto;" /><span style="background-color: white; text-align: -webkit-auto;">FIG. 3 illustrates in a schematic diagram an apparatus according to an embodiment of the invention, </span><br style="text-align: -webkit-auto;" /><br style="text-align: -webkit-auto;" /><span style="background-color: white; text-align: -webkit-auto;">FIG. 4 shows a schematic diagram of a computer-readable medium according to another embodiment of the invention, </span><br style="text-align: -webkit-auto;" /><br style="text-align: -webkit-auto;" /><span style="background-color: white; text-align: -webkit-auto;">FIG. 5 is a schematic diagram of a signal according to yet another embodiment of the invention, and </span><br style="text-align: -webkit-auto;" /><br style="text-align: -webkit-auto;" /><span style="background-color: white; text-align: -webkit-auto;">FIG. 
6 is a schematic diagram of a user interface according to another embodiment of the invention. </span><br style="text-align: -webkit-auto;" /><br style="text-align: -webkit-auto;" /><span style="background-color: white; text-align: -webkit-auto;">The term "modules" as used in the disclosure of the present invention is defined as logical entities, which are used in the transmission of files. The mentioned (DSMCC) object carousel is generally designed to broadcast an entire directory/file structure. It does this by encapsulating the files into objects and transmitting the directory names and the files themselves in special types of objects, i.e. directory objects and file objects. The directory objects preferably contain the name and the path of the files under that directory. The file objects carry the files. These objects in their turn are transported in groups of e.g. two or three, depending on the size of the objects. These groups are the logical entities referred to as "modules". At the receiving end, some received modules may contain objects which in turn encapsulate files that either do not need to be recorded or are mandatory to record. The invention allows the receiving end to determine which modules are mandatory to record and which need not be recorded. </span><br style="text-align: -webkit-auto;" /><br style="text-align: -webkit-auto;" /><span style="background-color: white; text-align: -webkit-auto;">In a preferred embodiment of the invention according to FIG. 1, a method 1 includes the step 10 of signalling storage related information of modules in an interactive television broadcast stream. </span><br style="text-align: -webkit-auto;" /><br style="text-align: -webkit-auto;" /><span style="background-color: white; text-align: -webkit-auto;">In another embodiment of the invention, signalling this storage related information is implemented in the Application Information Table, abbreviated AIT.
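The grouping of objects into modules described above can be pictured with a short sketch. The count-based rule here is a simplifying assumption (a real DSMCC generator groups by encoded object size), and the function name is purely illustrative.

```python
# Illustrative only: pack carousel objects into "modules" of at most a few
# objects each, mirroring the groups of two or three mentioned in the text.

def group_into_modules(objects, max_per_module=3):
    """objects: list of object names; returns a list of modules (name lists)."""
    return [objects[i:i + max_per_module]
            for i in range(0, len(objects), max_per_module)]

print(group_into_modules(["dir/", "a.class", "b.cfg", "logo.png"]))
# → [['dir/', 'a.class', 'b.cfg'], ['logo.png']]
```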
The AIT includes an extra file and/or a sub-table and/or the application storage descriptor that contains the list of module IDs and a field containing information on the storage characteristics of the respective module. In an example of this preferred embodiment, according to Table 1, the AIT includes an extra subsection that contains the list of module IDs (moduleID) and/or a field (storage_feature) stating whether the respective module is mandatory, optional or forbidden to record. In the example of Table 1, information for N modules is provided by a loop running from i=0 to i=N-1, whereby the exemplary application possesses N modules. </span><br style="text-align: -webkit-auto;" /><br style="text-align: -webkit-auto;" /><span style="background-color: white; text-align: -webkit-auto;"><b>TABLE 1: Syntax of the application storage descriptor</b></span><pre style="background-color: white;">
application_storage_descriptor() {     No. of Bits   Identifier
  descriptor_tag                        8            uimsbf
  descriptor_length                     8            uimsbf
  storage_property                      8            uimsbf
  not_launchable_from_broadcast         1            bslbf
  reserved                              7            bslbf
  version                              32            uimsbf
  priority                              8            uimsbf
  modulesCount (N)                     16            uimsbf
  for (i=0; i&lt;N; i++) {
    moduleID                           16            uimsbf
    storage_feature                     2            uimsbf
  }
}
</pre><br style="text-align: -webkit-auto;" /><br style="text-align: -webkit-auto;" /><span style="background-color: white; text-align: -webkit-auto;">In this way, a storage feature is defined for each module comprised in an application, whereby the application comprises at least one module. </span><br style="text-align: -webkit-auto;" /><br style="text-align: -webkit-auto;" /><span style="background-color: white; text-align: -webkit-auto;">A non-limiting example is to define the storage feature field (storage_feature) as 0 corresponding to forbidden, 1 to mandatory and 2 to optional. In the example according to Table 1, two bits are reserved for this purpose.
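The descriptor of Table 1 can be decoded with a small bit-level parser. This is a hedged sketch: the field widths follow the table, but the byte-level framing (trailing padding after the per-module loop in particular) is an assumption made for illustration.

```python
# Sketch of decoding the application storage descriptor of Table 1 from raw
# bytes. The per-module loop is not byte-aligned (storage_feature is only
# 2 bits wide), so the fields are read at bit granularity.

STORAGE_FEATURE = {0: "forbidden", 1: "mandatory", 2: "optional"}

class BitReader:
    def __init__(self, data):
        self.bits = "".join(f"{b:08b}" for b in data)
        self.pos = 0

    def read(self, n):
        value = int(self.bits[self.pos:self.pos + n], 2)
        self.pos += n
        return value

def parse_storage_descriptor(data):
    r = BitReader(data)
    desc = {
        "descriptor_tag": r.read(8),
        "descriptor_length": r.read(8),
        "storage_property": r.read(8),
        "not_launchable_from_broadcast": r.read(1),
        "reserved": r.read(7),
        "version": r.read(32),
        "priority": r.read(8),
    }
    modules_count = r.read(16)
    desc["modules"] = [
        {"moduleID": r.read(16), "storage_feature": STORAGE_FEATURE[r.read(2)]}
        for _ in range(modules_count)
    ]
    return desc
```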
</span><br style="text-align: -webkit-auto;" /><br style="text-align: -webkit-auto;" /><span style="background-color: white; text-align: -webkit-auto;">Alternatively, a module description file is defined, which lists the module's ID (module ID) and its storage feature (storage_feature). This file is generated by the DSMCC generator or the MHP mux and not by the user. This file can be generated according to the extension a file has or by determining which files are application files and which files are (meta)data files. The application files must always be recorded for an application to work. </span><br style="text-align: -webkit-auto;" /><br style="text-align: -webkit-auto;" /><span style="background-color: white; text-align: -webkit-auto;">According to a further preferred embodiment of the invention, the indication of mandatory, optional or forbidden to record is implemented in the Download Information Indication (DII) message of the broadcast stream. </span><br style="text-align: -webkit-auto;" /><br style="text-align: -webkit-auto;" /><span style="background-color: white; text-align: -webkit-auto;">There is one moduleInfo loop in the DII message, and one userInfo loop in the Broadcast Inter Object Request Broker Protocol (BIOP) moduleInfo loop. According to the embodiment of the invention, the indication is placed in the userInfo field. An example for a descriptor is defined according to Table 2. </span><br style="text-align: -webkit-auto;" /><br style="text-align: -webkit-auto;" /><span style="background-color: white; text-align: -webkit-auto;"><b>TABLE 2: Descriptor structure example</b></span><pre style="background-color: white;">
record-option descriptor() {   No. of Bits   Identifier
  descriptor_tag                8            uimsbf
  option_type                   2            uimsbf
}
</pre><br style="text-align: -webkit-auto;" /><br style="text-align: -webkit-auto;" /><span style="background-color: white; text-align: -webkit-auto;">In this example, the descriptor_tag is used to identify the descriptor, whereas the option_type is used to discriminate the indication of the modules' recording option. The option_type is in a non-limiting example defined as option_type=0 corresponding to forbidden, option_type=1 corresponding to mandatory and option_type=2 corresponding to optional. </span><br style="text-align: -webkit-auto;" /><br style="text-align: -webkit-auto;" /><span style="background-color: white; text-align: -webkit-auto;">A combination of the above two preferred embodiments forms another embodiment of the invention. The AIT e.g. includes the list of application files related to an application and the DII message indicates the storage feature of the particular application file in a certain module, wherein the modules are related to the broadcast in the object carousel, as mentioned above. </span><br style="text-align: -webkit-auto;" /><br style="text-align: -webkit-auto;" /><span style="background-color: white; text-align: -webkit-auto;">To implement the above methods, the user needs to give input information on the file level, i.e. which files are mandatory and which files are optional. This is implemented in the DSMCC generator or MHP Mux. Necessary information is generated in the AIT or DII message. </span><br style="text-align: -webkit-auto;" /><br style="text-align: -webkit-auto;" /><span style="background-color: white; text-align: -webkit-auto;">That means that new features/functions are added to the DSMCC generator/MHP mux.
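A receiver acting on the option_type values just defined might apply a policy like the following sketch. The function and its record_optional switch are illustrative assumptions, not part of the DII specification.

```python
# Sketch of a recording policy driven by the record-option descriptor of
# Table 2: option_type 0 = forbidden, 1 = mandatory, 2 = optional.

OPTION_TYPE = {0: "forbidden", 1: "mandatory", 2: "optional"}

def should_record(option_type, record_optional=True):
    """Mandatory modules are always recorded, forbidden ones never, and
    optional ones depend on a receiver-side setting (e.g. free space)."""
    kind = OPTION_TYPE[option_type]
    if kind == "mandatory":
        return True
    if kind == "forbidden":
        return False
    return record_optional

print([should_record(t) for t in (0, 1, 2)])  # [False, True, True]
```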
</span><br style="text-align: -webkit-auto;" /><br style="text-align: -webkit-auto;" /><span style="background-color: white; text-align: -webkit-auto;">In the following, two examples of User Interfaces (UI) in the MHP Mux for this feature will be given: </span><br style="text-align: -webkit-auto;" /><br style="text-align: -webkit-auto;" /><span style="background-color: white; text-align: -webkit-auto;">Firstly, when adding files or objects of the MHP application in the MHP Mux, each file will have a checkpoint to show whether it is mandatory or optional. An example for such a UI is illustrated in FIG. 6. </span><br style="text-align: -webkit-auto;" /><br style="text-align: -webkit-auto;" /><span style="background-color: white; text-align: -webkit-auto;">The user is asked to choose if a file is mandatory or not, and the default choice is mandatory. Another way is to ask the user to choose mandatory files first and then to specify the others. </span><br style="text-align: -webkit-auto;" /><br style="text-align: -webkit-auto;" /><span style="background-color: white; text-align: -webkit-auto;">Secondly, in the MHP 1.1 standard, there is an Application Description File for storable applications. This file's storage feature is modified. It is important that the DSMCC generator or MHP mux understands this file and gets each object's storage feature information from it. </span><br style="text-align: -webkit-auto;" /><br style="text-align: -webkit-auto;" /><span style="background-color: white; text-align: -webkit-auto;">In order to check with the user, the MHP Mux e.g. pops up a window to show each file's storage feature based on this application's description file. </span><br style="text-align: -webkit-auto;" /><br style="text-align: -webkit-auto;" /><span style="background-color: white; text-align: -webkit-auto;">In addition to the above, a broadcaster may broadcast the same application with a plurality of programmes. The same application is e.g.
regularly sent with football highlights programmes. By introducing further signalling in the broadcast, the storage system is optimised, i.e. applications are only stored once on a specific storage medium, thus saving storage space. In this case, the invention takes advantage of the application identifier, which is defined and included in the AIT. It consists of two fields, an organisation_id (32 bits) and an application_id (16 bits). These values are used to identify the same application in different broadcasts. </span><br style="text-align: -webkit-auto;" /><br style="text-align: -webkit-auto;" /><span style="background-color: white; text-align: -webkit-auto;">In another embodiment, the following properties of a module are signalled: a) Code/Data/Both and/or b) Fixed/Variable. </span><br style="text-align: -webkit-auto;" /><br style="text-align: -webkit-auto;" /><span style="background-color: white; text-align: -webkit-auto;">According to these semantics, a module flagged as Code indicates that the files included in the module are executable code (xlets), whereas a module flagged as Data does not contain any code files. The Fixed/Variable flag indicates if the content of this module is fixed for each broadcast of the application or variable. Fixed modules need only be stored once, whereas variable modules need to be stored each time and linked to the specific recording. </span><br style="text-align: -webkit-auto;" /><br style="text-align: -webkit-auto;" /><span style="background-color: white; text-align: -webkit-auto;">In the case that modules are mandatory to record, i.e. the application cannot run without them, and the modules are furthermore flagged as Fixed, they do not have to be recorded if they have already been recorded on the same storage medium. In this way, multiple recording is avoided, and storage space is not unnecessarily occupied on the storage medium.
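Putting the two ideas above together, a receiver could key its store on the (organisation_id, application_id) pair so each application is kept only once per medium, and skip mandatory-but-Fixed modules that are already present. All names in this sketch are illustrative assumptions, not a standardised API.

```python
# Illustrative only: dedup applications by their AIT identifier and skip
# re-recording Fixed modules that are already on the storage medium.

def plan_recording(medium, org_id, app_id, modules):
    """medium: dict mapping (org_id, app_id) -> set of stored module ids.
    modules: list of (module_id, mandatory, fixed) triples.
    Returns the module ids that still need to be written."""
    key = (org_id & 0xFFFFFFFF, app_id & 0xFFFF)  # 32-bit + 16-bit AIT fields
    stored = medium.setdefault(key, set())
    to_write = []
    for module_id, mandatory, fixed in modules:
        if not mandatory:
            continue              # sketch policy: record mandatory modules only
        if fixed and module_id in stored:
            continue              # Fixed content already recorded once: reuse it
        to_write.append(module_id)
        stored.add(module_id)
    return to_write

medium = {}
first = plan_recording(medium, 0xA, 1, [("code", True, True), ("news", True, False)])
again = plan_recording(medium, 0xA, 1, [("code", True, True), ("news", True, False)])
print(first, again)  # ['code', 'news'] ['news']
```

Variable modules (like "news" above) are written for every broadcast and would, in a full implementation, be linked to the specific recording.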
</span><br style="text-align: -webkit-auto;" /><br style="text-align: -webkit-auto;" /><span style="background-color: white; text-align: -webkit-auto;">According to yet another embodiment of the invention, the system groups files when generating the DSMCC carousel, in order to make the best use of the above mentioned flags. Fixed files are grouped together in modules. Code files and data files are each grouped together and stored separately from one another. In this way, the storage of the modules is optimised, i.e. access to the files is generally faster. Furthermore, the implementation of a recording system may be simplified, when equipped with a file version control. Generally, data files change more often than code files. Therefore, a separate version control for both file categories is preferably used. </span><br style="text-align: -webkit-auto;" /><br style="text-align: -webkit-auto;" /><span style="background-color: white; text-align: -webkit-auto;">As described above, this module related storage information is signalled in the AIT and/or the DII message. The above-described syntax of the previous embodiments is in this case extended to add this further information. </span><br style="text-align: -webkit-auto;" /><br style="text-align: -webkit-auto;" /><span style="background-color: white; text-align: -webkit-auto;">FIG. 2 shows a flow chart of a method 2 according to a preferred embodiment of the invention. The method 2 is a method for receiving an interactive television broadcast stream for recording, whereby at least an interactive television application transmitted inside object carousel modules is comprised in the broadcast stream. The method 2 comprises the step 20 of extracting storage related information of said modules in said broadcast stream and the step 21 of recording modules which are mandatory or optional to record. The recording is based on the storage related information extracted in step 20, i.e.
the obtained storage related information is used as control information on whether to record the application or not. </span><br style="text-align: -webkit-auto;" /><br style="text-align: -webkit-auto;" /><span style="background-color: white; text-align: -webkit-auto;">FIG. 3 illustrates in a schematic diagram an apparatus according to an embodiment of the invention. According to FIG. 3, an apparatus 3 for recording and/or playing back interactive television is provided. The apparatus is adapted to record interactive television from a broadcast transport stream (TS) 33 to a storage medium 32. Optionally, the apparatus 3 is also adapted to play back interactive television from a storage medium 32. The apparatus comprises means 30 for extracting storage related information of said modules from said broadcast stream, and means 31 for recording modules. Means 30 and 31 are operatively connected in order to only record modules from the TS for which the storage related information allows or permits recording. </span><br style="text-align: -webkit-auto;" /><br style="text-align: -webkit-auto;" /><span style="background-color: white; text-align: -webkit-auto;">In FIG. 4 a schematic diagram of a computer-readable medium 4 according to another embodiment of the invention is shown. The computer-readable medium 4 has embodied thereon a computer program for processing by a computer 41. The computer program comprises a code segment 42 for signalling storage related information of application modules in an interactive television broadcast stream. </span><br style="text-align: -webkit-auto;" /><br style="text-align: -webkit-auto;" /><span style="background-color: white; text-align: -webkit-auto;">A schematic diagram of a signal 5 according to yet another embodiment of the invention is illustrated in FIG. 5. The signal 5 is a signal for transmitting interactive television contents including applications.
The signal 5 comprises a broadcast transport stream of interactive television contents. The contents comprise at least an interactive television application 51, whereby the latter comprises modules being transmitted by the signal 5. The signal 5 comprises modules, and storage related information 52 and/or module identification information in the broadcast stream. </span><br style="text-align: -webkit-auto;" /><br style="text-align: -webkit-auto;" /><span style="background-color: white; text-align: -webkit-auto;">An example for elements comprised in a layout of a user interface 6 according to another embodiment of the invention is shown in FIG. 6. A graphical user interface 6 for an interactive television DSMCC generator is provided for specification of storage related information of application modules to be transmitted in a broadcast stream. FIG. 6 depicts an example of a screen shot of said user interface. An indicator 61 shows that an apparatus for configuring applications for interactive television, which uses the graphical interface, is in the Add Object Mode. In a window 62, a file name for an application to be configured is entered and displayed by appropriate means, such as a keyboard or a mouse, and then selected by a button 63. By application of button 64, the application entered in window 62 is transferred to the window 65 shown on the right side of FIG. 6. This window 65 contains a list 7 of applications and their storage feature. In the example of FIG. 6, the applications "match" and "member" are previously set to mandatory. The application "Fifa" has been transferred to window 65 by pressing button 64. The storage information of the application "Fifa" is selected with a button 68, e.g. from a dropdown list appearing when button 68 is pressed.
</span><br style="text-align: -webkit-auto;" /><br style="text-align: -webkit-auto;" /><span style="background-color: white; text-align: -webkit-auto;">The present invention has been described above with reference to specific embodiments. However, other embodiments than those preferred above are equally possible within the scope of the appended claims, e.g. any form of interactive TV, such as MHP, OpenTV, Digital TV Application Software Environment (DASE), or storage media such as DVD, SFFO (Small Form Factor Optical Storage), etc. Furthermore, an application might use a plurality of modules, and hardware or software can perform the invention. Equally, other coding methods for the storage related information and other ways of implementing the storage related information in the broadcast stream are possible. </span><br style="text-align: -webkit-auto;" /><br style="text-align: -webkit-auto;" /><span style="background-color: white; text-align: -webkit-auto;">Furthermore, the term "comprising" does not exclude other elements or steps, the terms "a" and "an" do not exclude a plurality, and a single processor or other unit may fulfill the functions of several of the units or circuits recited in the claims. </span><br style="text-align: -webkit-auto;" /><br style="text-align: -webkit-auto;" /><span style="background-color: white; text-align: -webkit-auto;">The invention may be summarised as a method (1) of transmitting interactive television, such as MHP, whereby interactive television applications are transmitted inside application-modules, preferably DSMCC-modules. These modules are transmitted in a broadcast stream. Recording systems for interactive television cannot decide which modules are to be recorded. Therefore storage related information of said modules is signalled in the broadcast stream. Module identification information is implemented in the Application Information Table (AIT) and/or in the Download Information Indication (DII) message.
Thus, information is included in the broadcast stream concerning categories stating whether application modules are mandatory, optional or forbidden to record. Alternatively, properties of a module are chosen from Code/Data/Both and/or Fixed/Variable. Recording systems use this information to decide whether application modules are to be recorded or disregarded. Alternatively, application module identification information is transmitted in said broadcast stream. A module identification number is used to avoid multiple recordings. Application modules having the same category are preferably grouped together. Storage space on recordable media for interactive television is thus used more efficiently, and recording of the modules is generally faster. </span><br style="text-align: -webkit-auto;" /><br style="text-align: -webkit-auto;" /><center><b>* * * * *</b></center><hr style="text-align: -webkit-auto;" />
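The recording decision described above can be sketched in a few lines. This is a hypothetical illustration only: the `Module` class, the category names, and the function are assumptions for clarity, not the patent's actual AIT/DII signalling syntax. A recorder skips modules marked forbidden, optionally skips optional ones, and uses the module identification number to avoid multiple recordings of the same module.

```python
# Storage-related categories signalled in the broadcast stream (assumed names):
MANDATORY, OPTIONAL, FORBIDDEN = "mandatory", "optional", "forbidden"

class Module:
    """One application module as seen by a recording system (illustrative)."""
    def __init__(self, module_id, category):
        self.module_id = module_id  # identification number, used to avoid duplicates
        self.category = category    # storage-related information from the AIT/DII

def select_modules_to_record(modules, record_optional=True, already_recorded=()):
    """Return the modules a recorder should store: forbidden modules and
    already-recorded module ids are skipped; optional modules are kept
    only if the recorder chooses to record them."""
    seen = set(already_recorded)
    keep = []
    for m in modules:
        if m.category == FORBIDDEN or m.module_id in seen:
            continue
        if m.category == OPTIONAL and not record_optional:
            continue
        keep.append(m)
        seen.add(m.module_id)   # same id seen again is a duplicate transmission
    return keep

# A carousel retransmits module 1; the recorder stores it only once:
stream = [Module(1, MANDATORY), Module(2, OPTIONAL),
          Module(3, FORBIDDEN), Module(1, MANDATORY)]
print([m.module_id for m in select_modules_to_record(stream)])  # → [1, 2]
```

Grouping modules of the same category together, as the summary suggests, would let a recorder make this decision once per group rather than per module.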
</div>
<div dir="ltr" style="text-align: left;" trbidi="on">
<br />
<table><tbody>
<tr><td align="LEFT" width="50%"><b>United States Patent</b></td><td align="RIGHT" width="50%"><b>8,191,101</b></td></tr>
<tr><td align="LEFT" width="50%"><b>Baran , et al.</b></td><td align="RIGHT" width="50%"><b>May 29, 2012</b></td></tr>
</tbody></table>
<hr style="text-align: -webkit-auto;" />
<div style="text-align: center;"><span style="font-size: x-large;"><b>Packet timing method and apparatus of a receiver system for controlling digital TV program start time</b></span></div><br style="text-align: -webkit-auto;" /><br style="text-align: -webkit-auto;" /><center><b>Abstract</b></center><div style="text-align: -webkit-auto;">
A system for the delivery of video on demand (VOD). A wireless remote control device generates keystroke signals for controlling a TV display and has a single button for restarting a selected program at the beginning of that program. A head-end unit supports separate downstream virtual channels for each separate TV set connected on a common TV feeder cable. The head-end unit locally records and stores many programs, and transmits each program using a compressed digital format such as MPEG-2 or MPEG-4. The head-end unit has means for protecting against signal theft. A set-top unit encapsulates the keystroke signals and transmits them via a two-way channel to the head-end unit.</div>
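The keystroke path in the abstract can be sketched as a small message protocol: the set-top unit wraps each remote-control keystroke with its own identity so the head-end can map the restart-button press to the right downstream virtual channel. The field layout, the button code, and the function names here are assumptions for illustration, not the patent's actual wire format.

```python
import struct

RESTART_BUTTON = 0x01  # hypothetical code for the single "restart program" button

def encapsulate_keystroke(settop_id: int, key_code: int) -> bytes:
    """Set-top side: pack a keystroke into a 5-byte message
    (4-byte set-top id + 1-byte key code, big-endian)."""
    return struct.pack(">IB", settop_id, key_code)

def head_end_handle(message: bytes) -> str:
    """Head-end side: unpack the message and decide what to do
    on the set-top's dedicated downstream virtual channel."""
    settop_id, key_code = struct.unpack(">IB", message)
    if key_code == RESTART_BUTTON:
        return f"restart selected program from the beginning for set-top {settop_id}"
    return f"ignore key {key_code} from set-top {settop_id}"

msg = encapsulate_keystroke(42, RESTART_BUTTON)
print(head_end_handle(msg))  # → restart selected program from the beginning for set-top 42
```

In the patent's architecture the real message would travel over the two-way cable channel; the sketch only shows the encapsulate/decode pairing.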
<hr style="text-align: -webkit-auto;" />
<table><tbody>
<tr><td align="LEFT" valign="TOP" width="10%">Inventors:</td><td align="LEFT" width="90%"><b>Baran; Paul</b> (Atherton, CA)<b>, Lin; Xu Duan</b> (Newark, CA)<b>, Pickens; John</b> (Newark, CA)<b>, Field; Michael</b> (Redwood City, CA)</td></tr>
<tr><td align="LEFT" valign="TOP" width="10%">Assignee:</td><td align="LEFT" width="90%"><b>Aurora Networks, Inc.</b> (Santa Clara, CA) </td></tr>
<tr><td align="LEFT" nowrap="" valign="TOP" width="10%">Appl. No.:</td><td align="LEFT" width="90%"><b>12/562,001</b></td></tr>
<tr><td align="LEFT" valign="TOP" width="10%">Filed:</td><td align="LEFT" width="90%"><b>September 17, 2009</b></td></tr>
</tbody></table>
<hr style="text-align: -webkit-auto;" />
<center><b>Related U.S. Patent Documents</b></center><hr style="text-align: -webkit-auto;" />
<table><tbody>
<tr><td width="7%"></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td align="left"></td><td align="center"><b><u>Application Number</u></b></td><td align="center"><b><u>Filing Date</u></b></td><td align="center"><b><u>Patent Number</u></b></td><td align="center"><b><u>Issue Date</u></b></td></tr>
<tr><td align="center"></td><td align="center">11141693</td><td align="center">May., 2005</td><td align="center"></td><td align="center"></td></tr>
<tr><td align="center"></td><td align="center">10328868</td><td align="center">Dec., 2002</td><td align="center"></td><td align="center"></td></tr>
<tr><td align="center"></td><td align="center">60382174</td><td align="center">May., 2002</td><td align="center"></td><td align="center"></td></tr>
<tr><td align="center"></td><td align="center">60344283</td><td align="center">Dec., 2001</td><td align="center"></td><td align="center"></td></tr>
</tbody></table>
<hr style="text-align: -webkit-auto;" />
<div style="text-align: -webkit-auto;">
</div>
<table><tbody>
<tr><td align="LEFT" valign="TOP" width="40%"><b>Current U.S. Class:</b></td><td align="RIGHT" valign="TOP" width="80%"><b>725/118</b> ; 725/88; 725/90</td></tr>
<tr><td align="LEFT" valign="TOP" width="40%"><b>Current International Class:</b></td><td align="RIGHT" valign="TOP" width="80%">H04N 7/173 (20110101)</td></tr>
<tr><td align="LEFT" valign="TOP" width="40%"><b>Field of Search:</b></td><td align="RIGHT" valign="TOP" width="80%">725/118,88,90</td></tr>
</tbody></table>
<br />
<hr style="text-align: -webkit-auto;" />
<center><b>References Cited <a href="http://patft.uspto.gov/netacgi/nph-Parser?Sect1=PTO2&Sect2=HITOFF&p=1&u=%2Fnetahtml%2Fsearch-adv.htm&r=0&f=S&l=50&d=PALL&Query=ref/8191101">[Referenced By]</a></b></center><hr style="text-align: -webkit-auto;" />
<center><b>U.S. Patent Documents</b></center><table><tbody>
<tr><td width="33%"></td><td width="33%"></td><td width="34%"></td></tr>
<tr><td align="left"><a href="http://patft.uspto.gov/netacgi/nph-Parser?Sect2=PTO1&Sect2=HITOFF&p=1&u=%2Fnetahtml%2FPTO%2Fsearch-bool.html&r=1&f=G&l=50&d=PALL&RefSrch=yes&Query=PN%2F5640388">5640388</a></td><td align="left">June 1997</td><td align="left">Woodhead et al.</td></tr>
<tr><td align="left"><a href="http://patft.uspto.gov/netacgi/nph-Parser?Sect2=PTO1&Sect2=HITOFF&p=1&u=%2Fnetahtml%2FPTO%2Fsearch-bool.html&r=1&f=G&l=50&d=PALL&RefSrch=yes&Query=PN%2F5650994">5650994</a></td><td align="left">July 1997</td><td align="left">Daley</td></tr>
<tr><td align="left"><a href="http://patft.uspto.gov/netacgi/nph-Parser?Sect2=PTO1&Sect2=HITOFF&p=1&u=%2Fnetahtml%2FPTO%2Fsearch-bool.html&r=1&f=G&l=50&d=PALL&RefSrch=yes&Query=PN%2F5805602">5805602</a></td><td align="left">September 1998</td><td align="left">Cloutier et al.</td></tr>
<tr><td align="left"></td></tr>
</tbody></table>
<i style="text-align: -webkit-auto;">Primary Examiner:</i><span style="background-color: white; text-align: -webkit-auto;"> Goodarzi; Nasser </span><br style="text-align: -webkit-auto;" /><i style="text-align: -webkit-auto;">Assistant Examiner:</i><span style="background-color: white; text-align: -webkit-auto;"> Rabovianski; Jivka </span><br style="text-align: -webkit-auto;" /><i style="text-align: -webkit-auto;">Attorney, Agent or Firm:</i><span style="background-color: white; text-align: -webkit-auto;"> Bruckner PC; John</span><br /><hr />
<center><b><i>Parent Case Text</i></b></center><hr />
<br /><br />RELATED PATENT APPLICATIONS<br /><br />This application is a continuation of U.S. application Ser. No. 11/141,693, filed May 31, 2005, by inventors Paul Baran, Xu Duan Lin, John Pickens and Michael Field, entitled "PACKET TIMING METHOD AND APPARATUS OF A RECEIVER SYSTEM FOR CONTROLLING DIGITAL TV PROGRAM START TIME", which is a divisional application of U.S. patent application Ser. No. 10/328,868, filed Dec. 23, 2002, entitled "METHOD AND APPARATUS FOR VIEWER CONTROL OF DIGITAL TV PROGRAM START TIME", by inventors Paul Baran, Xu Duan Lin, John Pickens and Michael Field, which claims priority to U.S. Provisional Application No. 60/382,174, filed May 21, 2002, entitled "METHOD AND APPARATUS FOR VIEWER CONTROL OF DIGITAL TV PROGRAM START TIME", by inventors Paul Baran, Xu Duan Lin, John Pickens and Michael Field, and U.S. Provisional Application No. 60/344,283, filed Dec. 27, 2001, entitled "MINI SET TOP BOX", by inventor Paul Baran.<hr />
<center><b><i>Claims</i></b></center><hr />
<br /><br />We claim:<br /><br />1. A method of producing a low-cost video storage and delivery system that manages jitter in a cost effective manner, the method including: at a storage unit, receiving a plurality of Ethernet encapsulated blocks of MPEG transport packets that belong to a program stream, and storing at least part of the packets in storage locations across a plurality of disks that correspond to arrival times of the packets that belong to the program stream; retrieving the packets from the storage locations across the plurality of disks and sending the Ethernet encapsulated packets to an EdgeQAM unit at times based on the storage locations that correspond to the arrival times of the packets at the storage unit; at the EdgeQAM unit, receiving the Ethernet encapsulated packets, buffering at least the MPEG transport packets, and finding in the MPEG transport packets a plurality of PCR time stamps; using a first reference clock of the EdgeQAM, scheduling the MPEG transport packets for transmission at times relative to the PCR time stamps and the first reference clock, without recovering an encoder clock used to generate the PCR time stamps; and transmitting the MPEG transport packets via a QAM channel to a receiving device.<br /><br />2. The method of the preceding claim, further including transcoding the PCR time stamps from a first time base of the encoder clock that operated at a first frequency to a second time base of a second reference clock that operates at a second frequency that is substantially different from the first frequency and that matches a frequency requirement of the QAM channel.<br /><br />3. The method of claim 1, wherein said storage unit receives said transport streams via a Gigabit Ethernet interface.<br /><br />4. The method of claim 1, wherein said EdgeQAM unit includes receiving single program transport streams and generating a multi program transport stream.<br /><br />5. 
The method of claim 1, wherein said EdgeQAM unit can provide up to 128 QAM channels.<br /><br />6. The method of claim 1, wherein the received Ethernet encapsulated packets arrive asynchronously at the EdgeQAM unit.<br /><br />7. The method of claim 1, wherein the PCR time stamped transport packets are synchronously sent out of said EdgeQAM unit to said receiving device.<br /><br />8. A system for managing jitter in a cost effective manner and producing low-cost video storage and delivery of program data, which operates on MPEG transport packets in which an encoder has occasionally inserted a program clock reference (PCR) timing field, the system including: a storage unit including at least one processor, a storage medium coupled to the processor that includes a plurality of disks, a receive-and-store module running on the processor adapted to store at least part of Ethernet encapsulated blocks of MPEG transport packets to the storage medium at locations based upon the arrival times of the Ethernet encapsulated packets at the storage medium, an Ethernet adapter coupled to the processor, and a retrieve-and-send module running on the processor adapted to retrieve the packets from storage locations within the storage medium based upon the arrival times of the Ethernet encapsulated packets, and to transmit the retrieved Ethernet encapsulated packets across the Ethernet adapter; an EdgeQAM unit including at least one processor, a packet buffer coupled to the processor, an Ethernet adapter coupled to the processor and in communication with the storage unit to receive the Ethernet encapsulated packets, a QAM transceiver coupled to the processor and to a network, and a packet scheduler module running on the processor adapted to find PCR time stamps in the MPEG transport packets within the Ethernet encapsulated packets, and to transmit the MPEG transport packets across the QAM transceiver at times corresponding to the PCR time stamps.<br /><br />9. 
The system of claim 8, wherein the EdgeQAM further includes a transcoder coupled to the processor that adjusts the PCR time stamps from a first time base corresponding to an encoder that applied the PCR time stamps to a second time base corresponding to a frequency at which the QAM transceiver operates.<br /><br />10. The system of claim 8, wherein the EdgeQAM further includes a multiplexer that combines a plurality of single program transport streams into one or more MPEG multi program transport streams.<hr />
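The PCR transcoding recited in claims 2 and 9 amounts to rescaling a tick count from the encoder's time base to the EdgeQAM reference clock's time base, so the scheduler can work entirely against its own clock without recovering the encoder clock. The sketch below is a simplified illustration under stated assumptions: real MPEG-2 PCRs are 27 MHz counts split into a 33-bit base and a 9-bit extension, whereas this uses plain integer tick counts, and the clock frequencies are made-up example values.

```python
def transcode_pcr(pcr_ticks: int, encoder_hz: float, qam_hz: float) -> int:
    """Rescale a PCR tick count from the encoder clock's time base to the
    QAM reference clock's time base: the same instant, expressed in the
    ticks of a clock running at a different frequency."""
    seconds = pcr_ticks / encoder_hz   # instant in wall-clock seconds
    return round(seconds * qam_hz)     # same instant in QAM-clock ticks

# One second of ticks on a 27 MHz encoder clock, re-expressed against a
# hypothetical 27.002 MHz QAM reference clock:
print(transcode_pcr(27_000_000, 27e6, 27.002e6))  # → 27002000
```

In hardware this would typically be a running offset correction rather than a per-stamp floating-point multiply, but the ratio is the same.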
<center><b><i>Description</i></b></center><hr />
<br /><br />BACKGROUND OF THE INVENTION<br /><br />This invention relates to digital cable television, and more particularly to placing the viewer in command of when the viewer chooses to watch a television channel from the beginning of a program.<br /><br />DESCRIPTION OF THE RELEVANT ART<br /><br />Digital television (TV), when first adopted by the satellite broadcasters, offered a better quality picture and a larger number of channels, compared with conventional broadcast TV or cable TV. Digital TV posed a competitive threat to the cable operators. Now, cable also is moving to digital TV, to match satellite delivery performance. Digital TV poses a major concern to the program content providers, as transmitted digital TV in the clear matches the quality of the original master, permitting ready high quality counterfeiting. While the digital signal for digital TV is transmitted in an encoded fashion, the decrypted digital signals are available on the printed circuit board of the legacy set-top units. To address this issue, the content industry formally is demanding limits on access to the digital signal in the clear--Digital Rights Management (DRM); otherwise the content will not be distributed as early or as widely as for protected systems.<br /><br />A pacing factor in the inevitable evolution of cable to an all-digital environment is the cost of the set-top unit. Current set-top units are expensive, especially in the United States. With a set-top unit required for each TV set that displays digital signals, the cable industry faces an overly expensive investment unless an alternative is provided.<br /><br />By the end of 2002, about 95% of the cable transmission in today's cable plant will be two-way. Over half of the homes passed by high speed cable are connected by cable systems operating at 550 MHz or above. Fiber cable increasingly will be used to connect clusters of users, decreasing in size over time to under 500 homes. 
Digital processing and storage are continuing historic price declines. Present MPEG-2 digital transmission can carry about 10 times as many channels per 6 MHz TV channel as conventional analog transmission. MPEG-4 and other compression schemes soon may double this number. This could make telephone Digital Subscriber Loop (DSL) economically attractive for competitive carriage of digital TV.<br /><br />Customer demands are shifting. Digital transmission quality increasingly will be necessary to compete with satellite. More TV signals will be needed to match satellite's delivery capability. The desire for Video on Demand (VOD) is increasing, and it now is an economically viable premium service for the cable operator. Time shifting technologies are able to charge $10 per month for service. Cable operators pay a far greater cost for set-top units than satellite TV providers, in part because the duopoly supplier situation in the U.S. prevents adequate competition. Satellite growth rates exceed those of cable.<br /><br />Each TV user is capable of having their own channel. An 860 MHz cable system can carry about 128 6-MHz analog TV channels. The cable system equivalently could carry about 1280 MPEG-2 channels. 
Since the number of houses on a single fiber is expected to decline to about 500, there is a surplus of channel capacity to provide each TV set with its own two-way channel to the cable head end.<br /><br />SUMMARY OF THE INVENTION<br /><br />A general object of the invention is to deliver new services to a viewer, which new services are not viable with satellite, either technically or financially, at a cost much less than current per-household costs, including set-top boxes and head-end infrastructure.<br /><br />Another object of the invention is a demand TV system which uses existing plant without change, and co-exists with currently deployed cable system elements.<br /><br />According to the present invention, as embodied and broadly described herein, a method and apparatus for viewer control of digital TV program start time is provided. The apparatus includes an input unit, a storage unit with a disk array, a gigaQAM unit, a controller, and a conditional access unit. The input unit receives digital channels from multiple constant bit rate and variable bit rate sources, e.g. HITS, Microwave, Satellite, RF Broadcast, and local video processing sources. The storage unit provides both capture and playout of real time video streams. In addition, files can be stored and played out on demand, i.e. video-on-demand is one application of this feature. The disk array contains both disk controllers and a large number of IDE disk drives. The gigaQAM unit receives MPEG encoded digital channels over a 10 Gigabit Ethernet interface from the storage unit or other source. The gigaQAM unit routes and duplicates each digital channel. Program information is inserted. PCR correction is performed and null frames are inserted. The Controller 800 manages the switching setup of information pathways from the Input unit 400, Storage unit 300 and gigaQAM unit 200. 
It constructs valid Program Specific Information (PSI) and System Information (SI) for the output multiplexes created at the gigaQAM unit. The conditional access unit enables the operator to authorize subscriber access to digital program streams. The conditional access unit offers a cryptographically strong conditional access solution augmented with physical mechanisms to enhance security.<br /><br />A video server divides the input video streams as they each separately enter the system and then sends the divided video streams to a server that has a simple switch as its first component. Each divided video stream is identified with a separate Ethernet address. A switch routes the appropriate piece of the video stream to the appropriate disk drive. When replaying the video stream, the disk drive controllers co-operate to send their portions of the video stream at the appropriate time back out through the switch. In this way, the single bottleneck of a CPU and RAM is eliminated, providing a video server in which many CPUs work in parallel to produce a much larger number of video streams.<br /><br />In another aspect of the invention, a high density radio frequency (RF) generation and quadrature amplitude modulation (QAM) system is provided. The high density RF generation and QAM system is for a multi-channel digital TV system head end. The high density RF generation and QAM system comprises a signal generator, a multiplicity of low-pass filters, a first combiner, a second combiner, a converter, and a distribution subsystem.<br /><br />The signal generator generates a multiplicity of agile signals at any of 6 MHz and 8 MHz spacing. 
The multiplicity of agile signals is generated above the 50-860 MHz TV band to avoid spurious signals within that band. A modulator separately modulates each agile signal with digital modulation. The modulator thus generates a multiplicity of digitally-modulated signals.<br /><br />A low-pass filter, or a plurality of low-pass filters, filters the multiplicity of digitally-modulated signals to maintain the modulated sidebands of each digitally-modulated signal within an allowed channel spectrum. A first combiner combines the multiplicity of digitally-modulated signals into a plurality of ensembles of 2, 4, 8 or 16 digitally-modulated signals, respectively.<br /><br />A converter heterodynes, with a common oscillator, each ensemble of the plurality of ensembles of digitally-modulated signals to fall within the 50-860 MHz TV band. A second combiner combines the heterodyned plurality of ensembles of the multiplicity of digitally-modulated signals into a plurality of groupings.<br /><br />A distribution subsystem distributes portions of the plurality of groupings of digitally-modulated signals for multiple TV distribution zones.<br /><br />Additional objects and advantages of the invention are set forth in part in the description which follows, and in part are obvious from the description, or may be learned by practice of the invention. The objects and advantages of the invention also may be realized and attained by means of the instrumentalities and combinations particularly pointed out in the appended claims.<br /><br />BRIEF DESCRIPTION OF THE DRAWINGS<br /><br />The accompanying drawings, which are incorporated in and constitute a part of the specification, illustrate preferred embodiments of the invention, and together with the description serve to explain the principles of the invention.<br /><br />FIG. 1 is a system-configuration block diagram;<br /><br />FIG. 2 illustrates a rack mount of the system for a head end;<br /><br />FIG. 
3 depicts a distributed system from a head end to various subscriber clusters;<br /><br />FIG. 4 shows a centralized system of the head end to various subscriber clusters;<br /><br />FIG. 5 illustrates input processing;<br /><br />FIG. 6 shows encapsulation techniques;<br /><br />FIG. 7 is a block diagram of daughter cards for the input processing module with ASI inputs;<br /><br />FIG. 8 is a block diagram of daughter cards for the input processing module with DHEI inputs;<br /><br />FIG. 9 shows a storage switch;<br /><br />FIG. 10 illustrates tracks and sectors;<br /><br />FIG. 11 shows usable space with uniform track size;<br /><br />FIG. 12 shows stream distribution over disks;<br /><br />FIG. 13 illustrates storage devices at the hub;<br /><br />FIG. 14 illustrates Ethernet addresses to device packets to disk;<br /><br />FIG. 15 illustrates partitioning Ethernet address;<br /><br />FIG. 16 shows an Ethernet destination address;<br /><br />FIG. 17 shows circular buffers spaced over disks;<br /><br />FIG. 18 illustrates seek to previous time in video stream;<br /><br />FIG. 19 illustrates disk request queue;<br /><br />FIG. 20 illustrates coordination of output stripes;<br /><br />FIG. 21 illustrates disk controller subsystem;<br /><br />FIG. 22 shows a gigaQAM card;<br /><br />FIG. 23 shows horizontal orientation of gigaQAM cards;<br /><br />FIG. 24 illustrates one of eight gigaQAM cards for the gigaQAM unit;<br /><br />FIG. 25 shows a logical switch element;<br /><br />FIG. 26 shows physical elements of switching card;<br /><br />FIG. 27 is a block diagram of a QAM synthesizer analog module on one card;<br /><br />FIG. 28 is a block diagram of a combiner;<br /><br />FIG. 29 illustrates conditional access to the head end;<br /><br />FIG. 30 illustrates controlled access streams;<br /><br />FIG. 31 shows three-level processing of service keys and control keys;<br /><br />FIG. 32 shows a security card;<br /><br />FIG. 
33 depicts encrypted conditional access;<br /><br />FIG. 34 shows aggregation and distribution over QAM channels;<br /><br />FIG. 35 shows a star Ethernet backplane;<br /><br />FIG. 36 shows clock synchronization between multiplexer and set-top box;<br /><br />FIG. 37 illustrates PCR correction;<br /><br />FIG. 38 illustrates scheduling period;<br /><br />FIG. 39 illustrates output multiplex scheduling;<br /><br />FIG. 40 illustrates system components using gigabit Ethernet;<br /><br />FIG. 41 illustrates time slicing input stream;<br /><br />FIG. 42 illustrates destination Ethernet address composition;<br /><br />FIG. 43 is an example of storage unit to gigaQAM streams;<br /><br />FIG. 44 shows multicast SPTS;<br /><br />FIG. 45 shows an external SPTS through Ethernet;<br /><br />FIG. 46 is an example of source based stream identification;<br /><br />FIG. 47 illustrates IP encapsulated MPEG SPTS;<br /><br />FIG. 48 shows stream identification within IP packets;<br /><br />FIG. 49 illustrates addressing in various encapsulations;<br /><br />FIG. 50 shows storage units connected via RPR ring;<br /><br />FIG. 51 is a block diagram of a server mother board;<br /><br />FIG. 52 shows a head-end OOB burst receiver and transmitter PCI card;<br /><br />FIG. 53 is a block diagram of a minibox; and<br /><br />FIG. 54 illustrates functional processing elements.<br /><br />DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS<br /><br />Reference now is made in detail to the present preferred embodiments of the invention, examples of which are illustrated in the accompanying drawings, wherein like reference numerals indicate like elements throughout the several views.<br /><br />Cable is the most successful last-mile technology for providing revenue-generating services to end-users (subscribers). The cable physical plant has evolved from a mostly one-way analog plant in the late 90's to today's two-way high-speed digital plant, with legacy analog support. 
The next major push within the cable industry is to enhance video service offerings and continue enhancing the two-way data environment. The ultimate target is the creation of the all-digital network, in which convergence of video, data, and voice, together with increased capacity, are the key attributes. The present invention is directed toward enabling the all-digital network with all-digital headend products, including high-speed data, Cable Modem Termination System (CMTS) and Cable Modem (CM), and voice Media Terminal Adapter (MTA). The present invention details technology and product offerings for the enhancement of video services. Video service is the bread-and-butter of the cable industry.<br /><br />Many challenges and opportunities exist as the Multiple Service Operators (MSOs) strive to continually enhance their video service, attract new customers, and retain current customers.<br /><br />Three key video networking challenges exist for the MSO: (1) Increasing the selection of video content, to compete with satellite. (2) Defining new services and service revenue. (3) Increasing the efficiency of use of the HFC spectrum (n*6 MHz, n*8 MHz).<br /><br />The present invention addresses these challenges by providing both a quantum improvement in density/cost for digital video and a personalized per-TV on-demand service with network-based caching and user-directed caching controls. Broadcast can be viewed live or time-delayed. Video-on-demand files can also be accessed/ordered, and individually played, paused, and rewound. The system transports only those video streams that are being actively viewed. The present invention enables an unlimited number of digital channels and digital videos to be carried over HFC infrastructure, translating to increased service revenue. 
Satellite cannot achieve this level of service over its one-way broadcast infrastructure.<br /><br />Deployment of a complete present invention enables the operator to fully optimize the facilities of his HFC physical plant for the deployment of revenue generating services. Since the present invention is based upon standards and standard interfaces it is also possible for the operator to deploy subsets of the complete systems solution, as shown in FIG. 1: (1) The MiniBox unit 100 as a low cost, high performance vehicle for deployment of broadcast TV, personal TV, and time shift services. (2) The GigaQAM dense modulator unit 200 for cost effective and rack-space effective delivery of video services from an IP/GIGE backbone to HFC. (3) The Storage unit 300 for use as a time shifted real time video caching function and local repository for video-on-demand MPEG files. (4) The Input unit 400 for conversion of MPEG video from ASI/DHEI to Ethernet backbone transport. (5) The MiniServer unit 500 for thin-client computing as a vehicle for deployment of new applications while future proofing the MiniBox or other deployed settop box CPE equipment. (6) The CMTS/INA 600 for an out-of-band (OOB) communication channel for digital settop boxes and concurrent offering of data and voice services. The storage unit 300 may employ a disk array 700. The system is controlled by controller 800.<br /><br />One embodiment of the present invention is a collection of units, plug-in modules, and software functions that enable a wide variety of video processing options and configurations for the MSO. The present invention may be deployed either in a centralized network configuration, collocated with encoders and digital receivers, or in remote hub locations. FIG. 1 details each of the units and shows their interconnectivity. Each unit fits within a standard 19 inch rack mounted chassis, as shown in FIG. 2. 
The units range from 1 U to 5 U (a U equals 1.75 inches of a 19 inch rack mounted panel) with depths up to 26 inches. Up to two complete 1280 channel systems, minus Input, can be fit into a standard seven-foot-high equipment rack.<br /><br />Headend and System Overview<br /><br />System Headend Physical Configurations are designed to be rack mounted. A typical configuration for a hub with 1280 subscriber digital channels, with a Gigabit Ethernet link for broadcast video, is shown in FIG. 2. The figure of 1280 subscriber digital channels, a typical number, assumes 128 QAM channels with ten digital channels per QAM channel. The total number of streams could be much higher. For example, twenty-five 2 Mbps channels per QAM at 8 MHz gives 3200 subscriber digital channels. Up to two combinations of Storage/gigaQAM units can be installed in a seven-foot high rack for a total capacity of 2560 subscriber digital channels. At a take-rate of 1:5 for advanced Retrovue time-shifted viewing service, a single rack can support 12,800 digital video subscribers.<br /><br />The System in an End-to-End Network redefines cable and backbone network architecture for delivery of broadcast-on-demand and video-on-demand service.<br /><br />Legacy cable network architectures for video (audio and video) are inflexible tree topologies. Video program streams (audio/video) are collected from multiple sources, analog and digital, and encoded in digital MPEG format at the headend. Individual program streams are selected based upon management and provisioning configuration information for delivery to subscribers. The selected video streams are then groomed for delivery over narrowband transport pipes over branches (selection, multiplexing, rate shaping). From the root, narrowband transport streams are delivered over a fixed bandwidth narrowband interface to narrowband tributaries. The delivery methods vary from local delivery across short-distance cable industry standard interfaces, e.g. 
ASI, to local HFC modulators, to long-distance encapsulation in constant bit rate synchronous delivery vehicles such as SONET.<br /><br />Legacy cable network architectures have the following issues: (1) Low backbone networking flexibility. (2) Failure to leverage the cost and functionality curves for Ethernet and IP. (3) Limited equipment supplier options. (4) Limited network topologies.<br /><br />The present invention, in contrast, leverages IP and Ethernet wideband technologies for distribution of video within the core. High-speed communications technologies utilizing internetworking protocols can be utilized to drive costs lower, carry multiple services over a single converged networking infrastructure, and leverage the technology innovation curve for backbone equipment. Because every subscriber has his own personal TV channel, the operator can economize on configured bandwidth and deliver only the content that subscribers are viewing. Alternate approaches require all channels to be broadcast at all times, and therefore require bandwidth to be over-provisioned.<br /><br />FIG. 3 is an overview of a distributed end-to-end system. In the distributed network configuration the Input unit is located in one or more centralized locations and the Server is located in remote hubs.<br /><br />Other configurations are also possible. FIG. 4 shows a centralized configuration that reflects a type of topology often seen in legacy physical plants. The present invention is flexible enough to support either a centralized or distributed topology, or combinations of the two.<br /><br />Key Assumptions<br /><br />Key assumptions driving the design of the present invention are: (1) Input capacity is approximately 150 digital channels and 150 video-on-demand files. 
(2) The nominal digital channel rate is 4 Mbps per channel (audio and video). (3) Digital channel streams are stored and processed as combined audio/video elementary MPEG streams together in memory and on disk. Each digital channel is stored independently on disk. (4) Digital channels are received in the clear; this is not a requirement if the encryption scheme is not sensitive to time shifting. (5) Modulator output scales from 16 QAM channels to 128 QAM channels depending upon configuration. At 256QAM (40 Mbps) each 6 MHz QAM channel carries 10 digital channels; thus the output capacity of the system scales from 160 digital channels to 1280 digital channels. (6) 64QAM, 256QAM, 512QAM, and 1024QAM modulations are supported. (7) Server components are serviceable and swappable while live, with no disruption of service.<br /><br />Input Unit<br /><br />As illustratively shown in FIG. 1, the Input Unit 400 receives digital channels from multiple constant bit rate and variable bit rate sources, e.g. HITS, microwave, satellite, RF broadcast, and local video processing sources. Each source is received as a digital transport stream through a common interface for processing digital transport streams, such as ASI. These digital channels are selected and multiplexed together into an MPEG-over-IP/Ethernet packetized transport stream. The transport stream is multicast using Ethernet technology to downstream video units.<br /><br />The Input Unit 400 can connect to the Storage Unit 300, gigaQAM Unit 200, or other video units with Ethernet input options. On the order of 150 digital channels can be processed.<br /><br />A single Input Unit 400 can serve multiple Storage/gigaQAM units, or more than one Input Unit 400 may be configured if input is desired from more than one location. 
These extensions are not shown in the figure.<br /><br />The Input Unit 400 has a variety of interface and processing options: (1) Video interfaces--Asynchronous Serial Interface (ASI), Digital Head-End Interface (DHEI); (2) Decryption of video content if encrypted; (3) Removal of null frames to decrease the data rate; (4) Mapping of individual digital channels to one or more MPEG-IP/Ethernet transport streams; (5) Transmission of MPEG-IP/Ethernet transport streams via unicast and multicast service over an Ethernet interface, with future support for IEEE Standard 802.17 Resilient Packet Ring and SONET.<br /><br />The Input unit 400, as shown in FIG. 5, and as controlled by control processor 425, performs the following operations: (1) ASI to Ethernet encapsulation 412; (2) DHEI to Ethernet encapsulation 412; (3) Scrambling 414; and (4) Aggregation 416. The diagram in FIG. 5 outlines the system. In this diagram the "Switch" element is used to aggregate input from many streams into Ethernet output interfaces. The device does not have to handle a full load from the ASI or DHEI ports, as these interfaces usually supply data at either a 27 Mb/s or 38.8 Mb/s rate.<br /><br />Although the Ethernet switching device is capable of filtering, it is desirable that the encapsulation logic filter 421 unnecessary packets before transmission to the Ethernet switch, to reduce the bit rate to be processed in the scrambling stage.<br /><br />In order to measure jitter through the switching network, each transport packet is tagged with a timestamp 418.<br /><br />In order to reduce encapsulation and interrupt overhead, several MPEG transport packets are collected together before encapsulation in Ethernet and IP frames. A first-in-first-out (FIFO) memory 416 allows up to 7 MPEG transport packets plus timestamps 418 to be accumulated before encapsulation 412. 
Data accumulated in the FIFO memory 416 should be flushed after aging for a maximum of 90 ms.<br /><br />For each selected PID in the input transport multiplex, an encryption key and cipher feedback state should be maintained. The framing device should also set the appropriate value in the scrambling control bits in the MPEG transport header to indicate whether an odd or even key is currently in use.<br /><br />Each MPEG transport stream packet received at the ASI or DHEI interface is encapsulated into an IP Ethernet packet.<br /><br />FIG. 6 shows encapsulating techniques.<br /><br />Occasionally an even larger number of physical input sources will be present, although the total number of programs or the input rate will not increase. This situation is likely to occur when a large number of inputs from single-channel encoders need to be combined. In such a situation, Input Units can possibly be cascaded through a spare Gigabit Ethernet port. Notice that this requires the implementation of a bi-directional Gigabit Ethernet port; in normal operation the ASI and DHEI ports need only be unidirectional.<br /><br />For the Input Aggregation Module (IAM), there are 8 daughter cards to support processing of all 32 multi-program transport stream (MPTS) inputs. Each daughter card, after filtering, delivers some portion of its input transport packets encapsulated in Ethernet frames to the switching device on the motherboard. MPTS inputs to the daughter card are delivered by 4 ASI or 4 DHEI interfaces. The on-card ASI or DHEI receivers send the MPTS to the FPGA for input processing.<br /><br />A Xilinx FPGA Virtex-II XC2V6000, by way of example, can be used for the MPTS input processing, which includes single program transport stream filtering, transport stream decryption, time re-stamping, and Ethernet frame encapsulation. The encapsulated SPTS is delivered, by way of example, to the IAM BCM5632-based GE switching card through a BCM8002 1 GE transceiver. 
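<br /><br />The accumulate-and-flush behavior of the encapsulation FIFO described above (up to 7 MPEG transport packets per Ethernet frame, with a partially filled frame flushed once its oldest packet ages past 90 ms) can be sketched as follows. This is a minimal Python illustration of the policy, not the FPGA implementation, and all names are ours:

```python
import time

TS_PACKET_SIZE = 188        # bytes per MPEG transport packet
MAX_PACKETS_PER_FRAME = 7   # 7 x 188 = 1316 bytes fits a 1500-byte Ethernet MTU
MAX_AGE_SECONDS = 0.090     # flush a partial frame after 90 ms

class EncapsulationFifo:
    """Accumulates timestamped MPEG transport packets before encapsulation."""

    def __init__(self, clock=time.monotonic):
        self.clock = clock
        self.packets = []   # (timestamp, packet) pairs awaiting encapsulation
        self.oldest = None  # arrival time of the oldest buffered packet

    def push(self, packet):
        """Buffer one transport packet; return a full frame's worth if ready."""
        assert len(packet) == TS_PACKET_SIZE
        now = self.clock()
        if self.oldest is None:
            self.oldest = now
        self.packets.append((now, packet))
        if len(self.packets) == MAX_PACKETS_PER_FRAME:
            return self.flush()
        return None

    def poll(self):
        """Flush a partially filled frame once the oldest packet is too old."""
        if self.packets and self.clock() - self.oldest > MAX_AGE_SECONDS:
            return self.flush()
        return None

    def flush(self):
        frame = list(self.packets)  # would be wrapped in IP/Ethernet headers here
        self.packets = []
        self.oldest = None
        return frame
```

The choice of 7 packets is what makes the grouping fit a standard Ethernet MTU while amortizing per-frame encapsulation and interrupt overhead.<br /><br />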
The daughter card can directly access the GE switching card CPU and memory through a PCI interface. The PCI interface also provides all required input processing card control and the power supply.<br /><br />FIG. 7 shows the block diagram of one IAM input processing daughter card with ASI input interfaces. FIG. 8 shows the block diagram of one IAM input processing daughter card with DHEI input interfaces.<br /><br />The key components on the daughter card may include, for example: one 1 GE transceiver IC BCM8002; one FPGA Virtex-II XC2V6000; four Cypress ASI receiver chipsets, or four DHEI receiver chipsets; and a PCB with PCI interface and power module.<br /><br />Storage Unit<br /><br />The Storage Unit 300, also called the Caching unit, provides both capture and playout of real-time video streams. In addition, files can be stored and played out on demand; video-on-demand is one application of this feature.<br /><br />For broadcast video the Storage Unit 300 receives one or more MPEG-over-IP/Ethernet packetized transport streams via a Gigabit Ethernet interface from one or more Input Units 400, or equivalent. Gigabit Ethernet can carry from 200 to 300 digital channels at 3 Mbps to 4 Mbps; the targeted capacity, however, is 150 channels. The transport streams are separated into individual program streams, also called digital channels, and the streams are spooled onto disk storage. Sufficient disk storage is available to store an average of two or more live hours for each and every digital channel, although the storage allocated per channel is configurable.<br /><br />The Storage Unit 300 also stores a collection of video files for a video-on-demand (VOD) playout service. If the requested video is located on local storage, then it is played out in real time with the same sorts of processing, encryption, and time-shifted controls offered for live broadcast channels. 
If the requested video is not on local storage, then the control function within the Server dynamically requests that the video be delivered from an external video library. The external video is distributed over the GIGE backbone into local disk storage via unicast or multicast file transfer. The external video may originate from another Storage Unit 300 or a video-server provided by a third-party supplier.<br /><br />The Storage Unit 300 contains one or two Disk Array modules that are connected to the main chassis via Gigabit Ethernet.<br /><br />The Storage Unit 300 de-spools digital channels and video-on-demand files from disk and transmits the digital channels as Ethernet frames over a 10 Gigabit Ethernet interface to the gigaQAM Unit.<br /><br />The Storage unit 300 of the present invention is responsible for accepting input for N MPEG-2 program streams and creating M output streams. The initial target is for N=150 MPEG-2 programs at a peak rate of 4 Mbits/s each and for M=1280 output MPEG-2 program streams at a peak rate of 4 Mbits/s each, as shown in FIG. 9. The input and output bandwidth managed by this device in megabytes per second is therefore:<br /><br />
<table><tbody>
<tr><td></td><td><b>Streams</b></td><td><b>Rate (MB/s)</b></td><td><b>Total (MB/s)</b></td></tr>
<tr><td>Input</td><td>150</td><td>0.5</td><td>75</td></tr>
<tr><td>Output</td><td>1280</td><td>0.5</td><td>640</td></tr>
<tr><td>Total</td><td></td><td></td><td>715</td></tr>
</tbody></table>
<br />The data flow has several interesting characteristics: output data flow is significantly higher than input; video data can be stored and retrieved in a sequential fashion; all output streams might be derived from a single input stream; and video data cannot be delivered late. Since the sustained data transfer rate from a single disk drive is much lower than the required data transfer rate, it is clear that the load must be spread across several disk drives in a way that ensures a uniform distribution of the load.<br /><br />Most disk drives have a stack of disk platters with a single voice-coil actuated arm moving a set of heads that read or write data on the platters. 
Typically only one of the heads can be active at a time. The following picture depicts an IBM drive, typical of those available.<br /><br />Data, as shown in FIG. 10, are stored on concentric tracks. The recording density is the same across the whole surface; therefore tracks at the outer edge of the disk contain a larger amount of data than tracks nearer the center.<br /><br />The data transfer rate that can be expected from a single disk depends upon three factors: (1) the number of sectors per track; (2) the seek time--time spent moving the head to a new track; and (3) the rotation speed of the disk.<br /><br />Reading all the data from a track during a single rotation of the disk is preferable in order to achieve a high transfer rate per read operation. If possible, reading the same track number on another of the surfaces can be used to improve the overall transfer rate.<br /><br />Each of the factors affecting transfer rate improves with each generation of disk drives. At present, an average transfer rate of approximately 30 Mbytes per second is possible.<br /><br />In order to simplify the management of disks it is desirable to transfer fixed amounts of data to the disk. Although the disk has a variable track size depending on distance from the center, we can create a uniform track size by ignoring small tracks at the center of the disk and not filling larger tracks at the edge of the disk. Fixed-size tracks could simplify tracking of disk space and improve read/write time. FIG. 11 depicts how much disk space is available if this technique is used for various track sizes for a particular IBM disk drive. Using information on the number of sectors available in each track, we can determine the optimal fixed track size to use to maximize disk space. 
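<br /><br />The trade-off just described (ignore tracks smaller than the chosen fixed size, only partially fill larger ones) lends itself to a simple search. The Python sketch below picks the fixed track size that maximizes usable capacity; the track-geometry list in the test is invented, since real values would come from the drive's sectors-per-track data:

```python
def usable_capacity(track_sizes, fixed_size):
    """Total bytes available if every track is treated as exactly fixed_size.

    Tracks smaller than fixed_size (near the disk center) are ignored;
    tracks larger than fixed_size (near the edge) contribute only fixed_size.
    """
    return sum(fixed_size for size in track_sizes if size >= fixed_size)

def best_fixed_track_size(track_sizes):
    """Sweep the distinct track sizes for the one maximizing usable space."""
    candidates = sorted(set(track_sizes))
    return max(candidates, key=lambda fixed: usable_capacity(track_sizes, fixed))
```

Only the distinct real track sizes need to be tried as candidates, because capacity changes only when the fixed size crosses one of them.<br /><br />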
However, because track size in part determines data transfer rate, it is also useful to calculate the optimum fixed track size that will yield just enough overall disk capacity to meet the Retrovue requirements of 150 programs stored for 2 hours at an average bit rate of 4 Mb/s (540 GB).<br /><br />In order to minimize the cost of the Retrovue Storage unit 300, a minimal number of disk drives is used. This means the minimum requirements are met both for storage capacity and for data transfer rate. Determining this value for a particular disk drive model is quite straightforward but requires knowledge of the performance characteristics and geometry of the disk drive in question.<br /><br />Retrovue requires that all output MPEG-2 program streams may be derived from a single input program (a data rate of 640 MB/s), while the maximum output data rate from a single disk is approximately 64 MB/s. Taking advantage of the relatively constant rate of digital video and the sequential nature of access to the data, the design outlined in FIG. 12 is used to distribute an incoming stream across an array of disks. In this design, data collected over a fixed time period is transferred to a particular disk. In the next fixed time period the same amount of data is collected and transferred to the next disk in the array. This process continues in a round-robin fashion until all disks have been used, then the first disk is re-used. In order to keep transfers to the disk efficient, the transfer to the disk is a multiple of the track size of the disk.<br /><br />The video server of this invention divides the input video-streams up as they each separately enter the system and then sends the divided video-streams to a server that has a simple switch as its first component. Each division is identified with a separate Ethernet address. A switch routes the appropriate piece of the video stream to the appropriate disk drive. 
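<br /><br />The round-robin distribution described above can be sketched in a few lines of Python. This is an illustration of the scheme only (chunk contents and disk counts are made up, and the real system does this in hardware per time period):

```python
def stripe_stream(chunks, num_disks):
    """Distribute fixed-time-period chunks of one stream round-robin over disks.

    Each chunk holds the data collected over one fixed time period and is
    sized as a multiple of the disk track size to keep transfers efficient.
    Returns, per disk, the list of (period, chunk) writes it receives.
    """
    disks = [[] for _ in range(num_disks)]
    for period, chunk in enumerate(chunks):
        disks[period % num_disks].append((period, chunk))  # wrap to first disk
    return disks
```

Replay visits the disks in the same order, which is why the read load for any single stream sweeps uniformly across the array rather than dwelling on one drive.<br /><br />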
When replaying the video stream, the disk drive controllers co-operate to send their portions of the video stream at the appropriate time back out through the switch. In this way the problem of a single bottleneck of a CPU and RAM is eliminated, providing a video-server in which many CPUs work in parallel to produce a much larger number of video streams. FIGS. 13 through 21 describe this process.<br /><br />In the present invention, a single input point, as shown in FIG. 13, is desirable for all video streams. This input point is usually located at a cable headend and fed from a variety of satellite sources and other transport multiplexes from local encoders. The selected input streams are then distributed to hub-level storage and QAM generation complexes.<br /><br />An obvious way to distribute each video stream (MPEG-2 single program transport stream) over the connecting gigabit Ethernet is to assign a single multicast Ethernet Media Access Control (MAC) address or multicast IP address to each stream so the streams can be differentiated at the storage box and assigned to the correct set of tracks on each disk. This assignment of an address requires the storage unit to demultiplex each stream from the gigabit stream and assign portions of that stream to each disk.<br /><br />Another method of directing the appropriate portions of the video stream to the appropriate disk is to have the Input Unit 400 assign the destination disk and encode that destination disk address as part of the destination Ethernet address for a chunk of the input video stream. FIG. 14 depicts this process with a packet of video directed to a disk. Note that under this scheme the destination Ethernet address is used to direct a switch to forward the packet to a particular disk. In addition, the source address can be used to identify the MPEG-2 program to which the packet belongs. 
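<br /><br />In rough Python, the addressing idea reads as follows. Only the principle (disk encoded in the destination address, program identified by the source address, a plain switch doing the demultiplexing) comes from the text; the MAC layout and base prefix here are hypothetical:

```python
BASE_MAC = 0x020000000000  # hypothetical locally administered MAC prefix

def address_chunk(program_id, period, num_disks):
    """Assign Ethernet addresses to one chunk of an input video stream.

    The Input Unit picks the destination disk (round-robin by time period)
    and encodes it in the destination MAC, so a plain Ethernet switch can
    forward the chunk straight to that disk's controller.  The source MAC
    identifies the MPEG-2 program the chunk belongs to.
    """
    disk = period % num_disks
    return BASE_MAC | disk, BASE_MAC | program_id  # (dest_mac, src_mac)

def switch_table(num_disks):
    """Forwarding table a simple switch needs: destination MAC -> disk port."""
    return {BASE_MAC | disk: disk for disk in range(num_disks)}
```

The appeal of the scheme is that the switch does no video-aware work at all; its ordinary destination-address lookup is the demultiplexer.<br /><br />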
This scheme has the advantage that a simple Ethernet switch can be used to demultiplex the incoming stream and direct it to the correct disk very economically.<br /><br />Storage Unit/Input Unit Interaction. To provide the large number of video streams required in Retrovue, a special video-server has been devised. In the prior art, a conventional video-server uses a number of disk drives to store a single video stream, often in a RAID array to provide protection against a disk failure. A single processor is used to assemble data from the collection of disks and provide an MPEG-2 SPTS (single program transport stream) for each required output stream. In this prior-art arrangement the single processor and its RAM inevitably become a bottleneck through which all streams must pass.<br /><br />Using the scheme described above to direct packets to the appropriate disk controller is very powerful but has a drawback. In FIG. 13 a single Input unit 400 is shown broadcasting, or multicasting, data to several hubs that contain Storage units 300. The scheme suggested above would work perfectly if each storage module had the same number of disks and the same usable track size. In practice it is desirable to allow each of the storage modules to be of a different size, possibly because disk technology has improved between the installation of early modules and late modules.<br /><br />Building on the previous scheme, we use multiple Ethernet destination addresses for packets belonging to a data stream. The Ethernet switch at the storage unit is configured to partition groups of Ethernet addresses, as shown in FIG. 15, and direct them to different disks in accordance with the track size for disks used in that particular storage module.<br /><br />One simple analogy that may help to explain this concept is that of a beer bottling plant with a conveyor belt carrying a long line of bottles that are all the same size. 
At the end of the conveyor belt is a machine that fills up a box full of bottles. This machine will grab six bottles and pack them into a box before moving on to another box. If the plant manager decided that a new type of box containing twelve bottles were to be produced, then it is a simple matter to switch box sizes and adjust the packing machine. The individual size of the bottles does not have to change even though the new package size contains twice as much beer.<br /><br />For each hub-storage module in the network, the switch at the hub is configured to direct a collection of destination addresses to the appropriate disk.<br /><br />Given a peak bit-rate, it is possible to accommodate VBR video in the storage system. The Input unit 400 is responsible for forwarding MPEG packets arriving within a time period even if the encapsulating Ethernet packet is not filled. The easiest option for the storage unit is to accumulate data for a period of time equal to the track size of the disk divided by the average bit-rate, and then output that data to the track even if the track is only partially filled.<br /><br />In order to direct incoming packets to the correct location on the correct disk, the 48-bit Ethernet destination address is used. The destination address is encoded as shown in FIG. 16. This allocates 11 bits (SSSSSSSSSSS) to the source MPEG-2 single program transport stream identifier and 12 bits (DDDDDDDDDDDD) to segment the stream; groups of these segments are directed to a particular disk.<br /><br />In order to create thousands of output video streams from a single input stream, the stream must be striped across a number of disks so that seek and transfer time do not become a bottleneck.<br /><br />Since a collection of circular buffers distributed across a collection of disks is created, as illustrated in FIG. 17, several output streams can be generated from those buffers. 
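<br /><br />One way to read the 11/12-bit split is sketched below in Python. The exact field positions within the 48-bit address are left to FIG. 16, so the layout here (stream bits above segment bits, under an assumed locally administered prefix) is our assumption; only the field widths come from the text:

```python
STREAM_BITS = 11       # SSSSSSSSSSS: source single program transport stream id
SEGMENT_BITS = 12      # DDDDDDDDDDDD: segment number within that stream
BASE = 0x020000000000  # assumed locally administered 48-bit address prefix

def encode_destination(stream_id, segment):
    """Pack stream id and segment into a 48-bit Ethernet destination address."""
    assert 0 <= stream_id < (1 << STREAM_BITS)
    assert 0 <= segment < (1 << SEGMENT_BITS)
    return BASE | (stream_id << SEGMENT_BITS) | segment

def disk_for_address(address, segments_per_disk, num_disks):
    """Steer groups of consecutive segments to a particular disk.

    Because the grouping is purely a switch configuration, storage modules
    with different disk counts or track sizes can share the same input feed.
    """
    segment = address & ((1 << SEGMENT_BITS) - 1)
    return (segment // segments_per_disk) % num_disks
```

Changing `segments_per_disk` per hub is the bottling-plant box change from the analogy above: the bottles (segments) stay the same size while each hub packs them into boxes (tracks) of its own size.<br /><br />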
Notice that, given knowledge of the current insertion point in the buffer and of the time represented by a buffered unit (disk track), the disk and track that represent a particular time in the past in a video stream can be determined. In the example in FIG. 18 the circular buffer is formed of 9 tracks spread over three disks (see FIG. 17). Blocks are labeled with the disk and track number. Notice that the distance to which we can seek is limited by the size of the circular buffer, and the resolution is limited by the disk track size and the peak stored data rate. In this example, with a track size of 400,000 bytes and a peak data rate of 4 Mbits/s, block D2T2 represents 1.6 seconds in the past.<br /><br />When video data, held at a constant bit rate by padding video that is below the peak rate, is striped across a collection of disks, we expect that for a single stream the disk it is drawn from will sweep across the set of disks. For example, if a single drive could deliver data at approximately 160 Mb/s and a single stream were 4 Mb/s, then the drive might be expected to service 40 requests per second.<br /><br />In order to allow a single drive to deliver the required number of streams by the required deadline, the number of requests on a single drive must not exceed a certain level, in the above example 40 requests per second. To make sure that each disk is not overloaded, the play-out of a video stream can be delayed until the request queue for the disk that contains the initial block of data required is below the maximum threshold for that disk. The request queue for each disk, as shown in FIG. 19, is balanced to the same level to allow maximum jitter in the delivery of each requested block.<br /><br />Assuming that all the disk subsystems have the same performance, the threshold can be computed for the time period T equal to data D stored on a track played at a rate of R. 
T=D/R<br /><br />Assuming each disk request is for a track of data, the time required to satisfy the request is:<br /><br />Request=Seek Time+Latency Time+Read Time<br /><br />Seek Time varies between 0 and Smax.<br /><br />Latency varies between 0 and Lmax.<br /><br />Read Time is fixed by the rotation speed of the disk.<br /><br />Several algorithms for re-ordering the queue of requested tracks are possible that change the order in which tracks are retrieved to minimize time wasted seeking from one edge of the disk to the other. As long as the deadline for retrieving a track is maintained, these algorithms can improve overall performance.<br /><br />During ingestion of a video stream, incoming data will enter the storage box in the order the data are received and can then be distributed to the appropriate disk. During output, the playback of this data is coordinated so that stripes of data from each disk are played out sequentially. Even though the video stream is spread over several disks, it is easy to compute the sequence of transfers that must take place and send this information to the logical disk drive controllers. However, the disk controllers must be coordinated: for example, in FIG. 20, disk 1 must complete the transfer of all packets from its track before disk 2 commences transmitting packets. In order to perform this coordination, the first disk controller sends a packet to the second controller via the switch when it has completed sending information for the first track; the second disk controller in turn informs the third controller, and so on. An alternative scheme involving a shared clock is potentially possible but is difficult to coordinate accurately, especially if the disk controllers are located in different chassis.<br /><br />Play-out of a video stream must begin at a certain time, for example the start of a television program. 
The disk track that contains that time can be calculated within the circular buffer to an accuracy of approximately a track's worth of data, about 0.9 of a second with current disk density. This means that play-out must start on a particular disk.<br /><br />Since the number of disk requests that can be serviced in time T is known, play-out may not be able to start immediately. In order to balance the load across all disks, we delay the start of play for a video stream until the request queue is below a threshold. This scheme is analogous to people waiting to get onto a moving carousel until an open space passes them, or cars waiting to get onto a roundabout.<br /><br />The switching element used to direct packets to the Disk Arrays shares a design with the GigaQAM unit 200 and Input unit 400.<br /><br />Disk Array<br /><br />The disk array contains both disk controllers and a large number of IDE disk drives.<br /><br />The disk controller subsystem is responsible for converting data over a 1 Gbit/s Ethernet connection into data over 5 IDE connections. Each pair of disk drives is a unit, with disk (b) backing up disk (a). Two write commands are required to mirror contents on both disks, but only a single active disk is read. Due to the nature of Retrovue, read transfers outnumber write transfers by about 4:1. Typical sustained IDE disk transfer rates are computed to be about 20 Mbytes/s; hence 5 IDE controllers are combined to feed a 1 Gbit/s interface.<br /><br />The disk array consists of 32 standard-height drives with 4 controllers packaged in a 5RU enclosure. Four copper gigabit Ethernet connectors connect the disk array to the storage/switch module.<br /><br />GigaQAM Unit<br /><br />The gigaQAM Unit 200 receives MPEG-encoded digital channels over a 10 Gigabit Ethernet interface from the Storage unit or other source. Each digital channel is routed, duplicated when directed to multiple destinations, remapped, and encrypted. Program information is inserted. 
PCR correction is performed and null frames are inserted. Each subscriber receives private conditional access encoding and key management. Per-subscriber conditional access provides a very strong content control and access control mechanism. Each digital channel has its own timing: one subscriber can view a live digital channel while a neighboring subscriber views the same digital channel delayed by, perhaps, 20 minutes, and yet another subscriber can have the same digital channel paused.<br /><br />The gigaQAM unit 200 receives single program transport streams and generates multi-program transport streams. It contains modules which convert the MPEG streams into QAM channels. Up to 128 QAM channels can be provided in a cost-effective and rack-space-effective manner.<br /><br />The gigaQAM unit 200 accepts an asynchronous 10 Gbps input stream of MPEG-over-Ethernet traffic from the STORAGE unit or other upstream source and delivers 128 QAM channels to a bank of RF output connectors.<br /><br />FIG. 22 illustrates a gigaQAM card. The gigaQAM unit 200 maps into a switching card located at the end of the chassis. The gigabit switching element on the gigaQAM unit card is at the center of a star network of point-to-point gigabit Ethernet connections to each of the GigaQAM cards. The switching card has 4 external ports, connected via a rear transition module to connectors on the back of the chassis. These ports are three 1-gigabit Ethernet connectors and one 10-gigabit Ethernet connector. Each gigaQAM card is connected to the switching element via gigabit Ethernet carried over the cPCI mid-plane. The gigaQAM card outputs via a number of RF connections that lead from the mid-plane to a combiner board that forms the rear of the chassis.<br /><br />The gigaQAM unit 200 performs the following stream processing functions: Stream arrival is asynchronous, MPEG framing vs. 
output clock; Multiplexing N programs per output channel; Encryption; PSI generation for output multiplexes; SI generation for inband; Auxiliary data streams; Authorizations; Self-installation; Interactive applications; Set-top firmware updates; Null packet generation; PCR correction. Stream departure is synchronous.<br /><br />The conceptual flow through the gigaQAM unit 200 is as follows: it receives the asynchronous serial stream from Transport Input and Storage; maintains a per-program-stream flow control state and buffer; performs all stream processing functions specified under the Process step; receives per-transport-stream and per-program-stream conditional access from the Host (TBD which interface this arrives from); and multiplexes and creates synchronous transport streams for QAM synthesis.<br /><br />In FIG. 23, the chassis contains a switching card and 8 GigaQAM cards oriented horizontally. These cards plug into a midplane which provides connectivity between the cards.<br /><br />There are 8 GigaQAM cards that support processing of all 1280 program streams. The GE switching card delivers 160 program streams to each daughter card through a 1 GE interface.<br /><br />On each GigaQAM card there is one FPGA. Each gigaQAM card performs DES encryption through the control signal from the PCI bus, routes the single program streams to the proper FIFO memory, and calculates the rate of delivered program streams. Each gigaQAM card also performs the 16 multiple program transport stream multiplexings in parallel, with PID re-mapping and PCR correction. Then all 16 MPTS outputs are multiplexed together inside the FPGA for QAM digital processing, including FEC and Nyquist filtering. FIG. 24 shows the block diagram of one processing GigaQAM card.<br /><br />The gigaQAM unit 200 includes: (1) One GE switching card with BCM5632 and IDT processor. 
(2) One processing and modulator card with one 6-Mgate FPGA Virtex-II XC2V6000, eight 16M×16 SDRAMs, 16 DACs, 16 quadrature modulators, and 4 RF converters; 8 such cards are provided. (3) One cPCI mid-plane, dual power modules, and the unit case.<br /><br />The on-board IDT processor runs VxWorks and has device drivers for the 10 Gbps interface, processing card interfaces, and a 100BT interface with the Controller. The software performs the following management and control functions: (1) Download software from the Server. (2) Gather and report management statistics on traffic, rates, errors, and faults from the processing cards. (3) Process control functions from the Controller, including routing table set-up, configuration of PID remapping, PID routing, encryption table key manipulation, stream output rate setting, and insertion of program information tables.<br /><br />The 10 Gbps Ethernet interface receives Ethernet-encapsulated MPEG frames. Details of the packet structure are contained in section 18. The 100BT management interface supports a management protocol with the server. Functions supported include software download, statistics gathering, enabling and disabling of processing functions, and dynamic stream processing configuration.<br /><br />The gigabit switching card is a common component whose design is shared between the INPUT, STORAGE and GigaQAM units. The gigabit switching card is based on the Broadcom BCM5632 switching device. The Broadcom BCM5632 is an Ethernet switching element, as shown in FIG. 25, that manages twelve 1-Gbit input ports and one 10-Gbit Ethernet port. It contains a 32K-row mapping table directing packets, based on destination address, to the appropriate output port. FIG. 
26 shows the physical elements that make up the switching card.<br /><br />The stream processing and QAM synthesis of each gigaQAM Synthesizer Module has the following characteristics: (1) 1280 constant bit rate MPEG QAM streams multiplexed into a 5.2 Gbps stream; (2) 128 6-MHz channels (10 program streams per channel); (3) In the future, 96 8-MHz channels with 10-15 streams per channel, depending upon MPEG resolution; (4) The FPGA does the re-multiplexing, encryption and PCR correction; (5) All QAM streams are the same data rate; (6) Performs FEC and digital QAM processing; (7) QAM modulator and RF conversion circuit; and (8) QAM demodulator feedback loop for modulator calibration and performance monitoring. The tunable spectrum extends from 50 MHz to 860 MHz.<br /><br />The analog QAM synthesizer and RF conversion portion is shown in FIG. 27. The analog QAM synthesizer accepts FEC-encoded and digitally filtered data from an FPGA and generates QAM-modulated HFC RF signals.<br /><br />The FPGA digitally processed data first passes to the DAC to generate base-band I and Q analog signals, which after proper filtering are quadrature modulated into a 1 GHz to 2 GHz RF signal. With gain control and down-conversion, the final QAM-modulated HFC RF signal from 50 MHz to 860 MHz is delivered to the combiner to deliver 128 QAM signals.<br /><br />The FPGA groups 4 QAM channels of data together and delivers the data to the D/A converter AD9773, which can support more than 4 QAM channels of data. The quadrature modulator AD8346 modulates the 4 QAM channel signals together as a band at RF in the 1 GHz to 2 GHz range. The outputs of four such quadrature modulators are combined for RF down-conversion. Thus one RF down-conversion handles 4 bands of signals, a total of 4×4=16 QAM channels. 
All 8 down-conversion blade outputs are combined for different clusters, a total of 128 QAMs.<br /><br />The low-phase-noise LO generation circuits used for the QAM modulators and the down-conversion mixers are of critical importance to guarantee the 1024QAM performance.<br /><br />The eight RF outputs from the 8 QAM modulator daughter cards are combined into different groups for different clusters. FIG. 28 shows the block diagram of the combiner portion. A 20 dB directional coupler is used to deliver the chosen RF signal from the combiner connector to a QAM-receiver cable modem for the associated modulator calibration and transmitter performance monitoring.<br /><br />Controller Unit<br /><br />The Controller 800 manages the switching set-up of information pathways from the Input unit 400, Storage unit 300 and gigaQAM unit 200. It constructs valid Program Specific Information (PSI) and System Information (SI) for the output multiplexes created at the gigaQAM unit.<br /><br />The Controller 800 may run several functions described within the Mini-Server 500 depending upon configuration.<br /><br />Mini-Server Unit (Multiple Functions)<br /><br />The MiniServer Unit 500 executes one or more selected headend functions. Depending upon loading requirements, one or more MiniServer Units may be required. The following functions can be run on the MiniServer Unit 500: (1) MiniBox Management and Control--functions pertaining to configuration, initialization, dynamic control, statistics gathering, alarm collection, and billing and accounting interfaces. (2) Retrovue™ Application--all functions pertaining to handling the remote control of the storage server, broadcast-on-demand, and video-on-demand applications. In the first generation the On-Screen Display is generated within the MiniBox. In the second generation the On-Screen Display is generated within the MiniServer unit 500. (3) MiniServer Application--execution of applications in a headend server rather than the MiniBox. 
The MiniBox becomes a graphics display device, and all On-Screen Display screens are generated by the MiniServer Application and transmitted downstream within the MPEG transport stream. The MiniServer implements the full application functionality. In this configuration the primary functions within the MiniBox are display processing, video and interaction, and conditional access decoding.<br /><br />Out-of-Band CMTS Unit<br /><br />The CMTS unit 600 is a 1 RU unit that provides a 100 BT WAN interface and a DOCSIS 1.1 and 2.0 HFC interface. In addition to the normal requirements for a DOCSIS CMTS, the CMTS unit 600 also supports the DOCSIS Set-top Gateway (DSG) application-layer interface for forwarding of multicast out-of-band traffic between the video headend and the set-top box. The CMTS unit 600 also supports any video-specific modes mandated by the DOCSIS standard, such as the potential requirement for maintaining active registered status with a modem that has lost connectivity in the return path.<br /><br />The MiniBox is a small-footprint economical unit that enables personal digital TV service for the subscriber. The MiniBox provides a high-quality analog interface to the TV for audio/video output and interacts with the remote control to provide interactive control of the personal-TV interface: program guide, pause, restart, jump-to-program-beginning, etc. The MiniBox communicates over a return channel with the Server 500. The Server 500 implements real-time controls such as channel-switching, pause, restart, etc. Through the MiniBox the subscriber can access live broadcast digital channels, time-shifted programming, and stored video files.<br /><br />For basic service the MiniBox can be used to access digital channels without advanced broadcast time-shifting and video-on-demand functionality.<br /><br />The MiniBox contains a lightweight (in terms of memory and processor requirements) application that interacts with the Server. 
The MiniBox application paradigm is targeted at low-cost and long-life hardware. The initial application contains the electronic program guide and MiniBox configuration, and access to the Retrovue application server in the headend for controlling channel-change and time-shift channel viewing options.<br /><br />Flexible interactive applications are enabled through interaction with an Application Server. The Application Server can run in the MiniServer or in the Controller depending upon configuration.<br /><br />The first-generation system provides remote control of virtual-PVR storage for the time-shifted personal TV operation. Subsequent generations of the present invention will support offload of interactive applications processing from the MiniBox via the headend MiniServer unit.<br /><br />When operating in Retrovue™ mode, each subscriber's digital channel stream is encrypted with unique keys. Content is not switched to the subscriber unless the subscriber is authorized.<br /><br />System Interfaces<br /><br />The present invention supports multiple external and internal interfaces, and the ability to carry multiple conditional access systems. On the Video-to-Input interface, video that is not Ethernet-encapsulated is captured via ASI interfaces connected to external video processing units.<br /><br />The Input-to-Storage interface is Gigabit Ethernet. The primary flow over the WAN interface is Ethernet-encapsulated MPEG transport streams. MPEG transport streams may be encapsulated in raw Ethernet, or may be encapsulated via IP, UDP, and RTP depending on system configuration options.<br /><br />The format of the stream is IP-encapsulated MPEG. The MPEG stream is frame-asynchronous, i.e. the arrival time of the encapsulated MPEG frames is decoupled from the actual output timing: the Program Clock Reference (PCR) is not synchronized with the Ethernet frame timing. 
Downstream units are responsible for inserting sufficient buffering to accommodate end-to-end jitter and for recalculating the PCR (inserting null frames when needed).<br /><br />The LAN interface is 10/100 BT Ethernet. It connects to system equipment such as the Controller Unit, MiniServer Unit 500, CMTS 600, Simulcrypt conditional access generators, network managers, and billing and provisioning servers.<br /><br />It should be noted that 100 BT is also used to interconnect units for management and control functions. Though not shown in the diagram, each module contains a single 100 BT interface, and interconnectivity is achieved via an external switching hub.<br /><br />The gigaQAM unit 200 has an HFC Digital Video Interface which is connected directly to the HFC network. The traffic stream is digital video DVB-standard MPEG transport streams. MPEG frames are transported via one-way downstream n*6 MHz (or m*8 MHz) QAM channels output from the gigaQAM Modulator. No analog traffic is generated.<br /><br />The primary HFC out-of-band interface is DOCSIS. The CMTS provides the DOCSIS interface to the HFC.<br /><br />A second out-of-band interface is based upon DAVIC. An Aloha-style two-way MAC/PHY may be used for communications with DAVIC-based set-top boxes. More details of the DAVIC OOB interface are described in the OOB module description below.<br /><br />The storage-to-gigaQAM interface is 10 GigE, with 5.2 Gbps of embedded MPEG traffic at full capacity. The interface carries MPEG frames including program streams, program tables, and frame-tags. The encapsulated MPEG stream is asynchronous with respect to the output clock. Frames from multiple program streams and multiple transport streams are clumped in variable-size bundles. Individual program streams are differentiated by Ethernet multicast addressing. 
Each program stream is encapsulated within Ethernet frames addressed to an Ethernet multicast address unique to that program stream.<br /><br />The Controller/MiniServer-to-Storage/gigaQAM/Input Interface from the server to other Units is 100 BT. Configuration, diagnosis, and real-time control functions are performed over this interface. A typical installation will utilize an external switching hub to interconnect the server module with the other modules.<br /><br />The management function uses this interface to provision each unit, download software, collect management statistics, and collect alarms.<br /><br />The Retrovue set-top box channel-change function uses this interface. Upon receipt of the channel-change control from the set-top, the Controller issues to the Storage unit a command to map a new program stream to the specific set-top, identified as a stream-destination associated with a specific QAM output transport stream.<br /><br />The MiniServer function uses this interface. There are two modes. In the settop-based application mode the Real Time Control Protocol is used to communicate between the MiniServer and the Retrovue application in the settop box. In the applications thin-client mode On Screen Display (OSD) is generated by applications within the MiniServer and sent to the MiniBox via the Ethernet interface to the Storage Unit or gigaQAM processor Unit, depending on configuration options.<br /><br />Conditional Access<br /><br />Conditional Access (CA) is compatible with DVB standards for carriage of Entitlement Management Messages (EMM) and Entitlement Control Messages (ECM). 
The present invention is compatible with the DVB Simulcrypt interface and can support third-party conditional access systems if selected by cable operators.<br /><br />The following describes a novel conditional access scheme which uses standard interfaces and protocol elements, but which leverages the two-way communications environment for strong authentication and authorization functions.<br /><br />Conditional Access enables the operator to authorize subscriber access to digital program streams. In recent times the requirement has expanded to Digital Rights Management and copy protection. The present invention offers a cryptographically strong CA solution augmented with physical mechanisms to enhance security.<br /><br />A fundamental requirement for strong security is a two-way communications path. One-way systems open security holes that are difficult and costly to detect and repair.<br /><br />Conditional access is compatible with DVB standards. Standard MPEG protocol elements--Entitlement Management Messages (EMM) and Entitlement Control Messages (ECM)--carry entitlement and control messages to manage the encryption of program streams. For systems with concurrent multi-vendor conditional access systems, DVB-standard Simulcrypt EMM/ECM protocol extensions and headend interfaces are used. Multiple methods are allowed for encrypting the MPEG digital program streams--the DVB scrambling standard, the DES standard, and the Motorola-proprietary DigiCipher-II DES.<br /><br />The conditional access system of FIG. 29 uses a typical three-layer system for protection of the content. Three streams of information--the content stream, the entitlement control stream (ECM), and the entitlement management stream (EMM)--are depicted in FIG. 30.<br /><br />The content stream is divided into working periods, each of which is scrambled using a different working key. 
The working key for the next period is delivered ahead of its period and is synchronized using the scrambling control bits in the MPEG header, which indicate whether the odd- or even-period key is in use.<br /><br />The ECM stream contains the working keys for the content stream. Each ECM packet is encrypted using a service key. Only authorized boxes have the service key. The ECM packet also contains, among other information, the time period during which the service key is valid, and it is authenticated so that ECMs can only be provided by a valid headend system. ECM packets are provided in-band along with the content for which they provide keys.<br /><br />The EMM stream contains the service keys that enable decryption of the ECM stream. Each EMM is encrypted using a key that is unique to the set-top box to which it is sent. This enables the individual authorization of boxes. The EMM stream can be delivered in-band or out-of-band to the set-top. Since service keys periodically expire, EMMs containing new service keys are sent at a low data rate to individual set-top boxes. Like ECMs, EMMs are authenticated using a digital signature to ensure they originate at a valid headend.<br /><br />FIG. 31 outlines the three-level processing of service keys (EMM) and control keys (ECM). The security card in the set-top box provides a means to upgrade system security should the current key distribution method be compromised. In addition it can provide other features like pre-authorized viewing and entitlement portability. FIG. 32 outlines the role of the security card. Notice that the security card is not required for initial deployment, as the security processor can perform ECM-to-Working-Key transformations. 
However, when mandated, the security card can augment this functionality to allow a security upgrade.<br /><br />Communications between the security card and the set-top box are encrypted and authenticated to prevent piracy.<br /><br />The MiniBox implements the logic necessary for decoding keys for access rights carried in Entitlement Management Messages (EMM) and decoding keys for individual digital channels carried in Entitlement Control Messages (ECM). The MiniBox also contains three descrambling algorithms--the DVB scrambling standard, the DES encryption standard, and DigiCipher II proprietary encryption. The choice of decryption algorithm is a configuration option. In addition, transactions with the secure processor and smart card interface are encrypted using a public key algorithm.<br /><br />The MiniBox contains an onboard security processor that is protected from physical access by subscribers and clone manufacturers by an electronically and physically secure manufacturing technique. The interface between the security processor and the Broadcom 71xx chip is now protected with a public key mechanism. The security processor contains private keys that correspond to the manufactured public keys for the MiniBox. These public keys are not accessible by 71xx software. The interface between the security processor and the 71xx chip consists of a shared buffer in which EMM and ECM message elements are placed for decryption. The security processor returns the decrypted control words, which are subsequently programmed by software into registers for MPEG decoding.<br /><br />An option exists for installation of a smart card--it is expected that this option will only be used if the operator chooses to disable the public key that is used for operation without a smart card (the factory default). 
The smart card is manufactured to be specific to the serial number of the unit and contains a random number that is combined with one of the unique private keys manufactured within the security processor to generate a new working public and private key.<br /><br />In the headend, an important element of the Conditional Access system is the Entitlement Management Message Generator (EMMG). The EMMG is located at the point where the digital channel is encrypted or scrambled. Simulcrypt defines the standard interface in the headend and the method for carrying both EMM/ECM key streams embedded within the MPEG transport stream.<br /><br />To facilitate easy integration with Simulcrypt Conditional Access (CA) systems that emphasize encryption based on content rather than delivery address, Conditional Access in the Input Concentrator can scramble input channels before storage. This reduces the number of individual EMM streams that are created to authorize users. Since encryption prior to storage is possible, it is also likely that certain pre-encrypted content can be stored and replayed without any scrambling in the headend or server.<br /><br />In certain cases it may be desired to support DigiCipher-II. In this case, as shown in FIG. 33, the EMMG is collocated with proprietary DigiCipher-II equipment further upstream, as shown in FIG. 9. 
It is preferable, however, to utilize conditional access so that each subscriber can be given his own unique EMM/ECM keys, providing the strongest security.<br /><br />The Conditional Access solution enables the following services: (1) Broadcast TV, IPPV, Order-Ahead PPV, NVOD and Data Broadcasting; (2) Pre-encrypted VOD; (3) Variable pricing of PPV and VOD prior to show-time; (4) Real-time session-based encryption for VOD; (5) SVOD; (6) IPPV use of store-and-forward credit; (7) Subscription upgrade from the EPG; (8) Rapid entitlement of any service to a subscriber; and (9) Geographic, spot, and personal viewing blackouts of any channel or event.<br /><br />A fully secure system requires a method to prevent cloning of set-top units. A major advantage of the present two-way system design is that this possibility can be completely eliminated, while with one-way systems cloning remains a security weak spot unless great care is taken. To prevent cloning, and for other reasons that are described elsewhere, the system uses a Trusted Agent approach.<br /><br />This arrangement is familiar to those using the Internet. When purchasing something over the Internet, you are generally asked to transmit your credit card number for payment. When this occurs during your transaction your computer will, if possible, be shifted to operate in what is called the "secure mode".<br /><br />The secure mode on the Internet is based on sites called Trusted Agents running certified software, together with Certificates that vouch that the site is in fact the one that it claims to be. The Certificates pass authority around based on each agent certifying another. All communications that transpire in the secure mode are encrypted using public key authentication.<br /><br />The individual Servers will be connected to one or more Trusted Agent sites via the Internet to store backup information on every MiniBox in the system. 
This may be done either at a central site with backup facilities elsewhere, or as a distributed function among the Servers themselves.<br /><br />The backup information for each MiniBox includes: (1) The serial number and public key for the set-top's physical unit; (2) The serial number and public key presently authorized for the set-top's Smart Card; (3) The presently connected Server address; (4) Billing information, including (A) Payment status, (B) Program viewing authorizations, (C) Customer address and other billing information, and (D) Customer number; and (5) An optional Internet address.<br /><br />These communications will be minimal, low-data-rate, and usually run as background programs. The sorts of communications would include checking for duplicate box or Smart Card numbers in different systems, and customers moving from one site to another. This allows easy movement of customers from system to system, and it serves as a deterrent to anyone who might otherwise be tempted to counterfeit a set-top or Smart Card and try to use the devices in another cable system. As these units can be sold in stores there is the possibility of theft, but anyone using a stolen unit would be spotted very quickly.<br /><br />After a customer purchases a MiniBox from a competitive electronics store, they call a call center and provide the operator with their credit card number (for service), the 10-digit alphanumeric serial number of their new MiniBox, and the 12-digit alphanumeric serial number printed on their Smart Card.<br /><br />The database is quickly checked. The manufacturer of the MiniBox would have entered this data via CD-ROM or over the Internet to a Trusted Agent. The data transmitted by the manufacturer would include the serial number and the associated code for the box and the card.<br /><br />When the MiniBox is first set up, a message in the clear addressed to the MiniServer sends only the MiniServer serial number. 
The Trusted Agent checks this, and a reply message is sent in encrypted form under the public key of the MiniBox.<br /><br />The base system will evolve. The following outlines several possible extensions to the base system.<br /><br />Further Extensions Of The Invention<br /><br />The following interface extensions are possible: (1) The GigE interface becomes n*GigE and 10 GigE; (2) The GigE interface becomes 802.17 RPR and Sonet; (3) The 10 GigE interface to the gigaQAM becomes multiple ASI input interfaces; (4) The QAM/RF output interface in the gigaQAM becomes multiple ASI output interfaces (gigaMUX); and (5) Multiple Ethernet-based MPEG framing formats--internal tagged storage-to-processor, IETF standards for support of other devices.<br /><br />In the MiniBox family, DOCSIS is supported in two ways: (1) a MiniBox with a DOCSIS 2.0 return channel; and (2) a MiniBox with an Ethernet return channel, where the Ethernet return channel connects to a DOCSIS cable modem over an in-home Ethernet (or equivalent) network. A variant of the Ethernet MiniBox uses an in-home network that is CableHome compliant.<br /><br />The MicroBox is a smaller and less-expensive set-top box that leverages the MiniServer. It contains a virtually zero-footprint RTOS and application environment. The MicroBox is designed for future migration into a television or other consumer video-enabled device.<br /><br />Additional products derivable from the baseline include the following: (1) a 10 Gbit/12×1 Gbit Managed Ethernet Hub; (2) a Satellite-to-QAM down-converter (a modulator with a satellite receiver front-end); and (3) a standalone storage streaming server for other vendors' downstream ASI/Ethernet-to-QAM modulator devices, or for other markets such as fiber and DSL.<br /><br />Several elements of the Design involve aggregation of inputs together or distribution of a single data stream to several processing elements. 
Since data rates are in the 1 to 10 gigabit range, a similar gigabit switching element is utilized in the INPUT, STORAGE and gigaQAM units.<br /><br />The core switching component shared by the devices is depicted in FIG. 34. This element is based on a 12×1-gigabit plus 1×10-gigabit switching device that operates at wire speed. The switching element is capable of routing Ethernet packets according to a mapping table.<br /><br />The compact PCI (cPCI) backplane and chassis components allow a unit to be constructed with readily available parts; the unit contains 10 slots for processing cards. Eight of the slots are connected to a switching slot via gigabit Ethernet carried over the backplane. FIG. 11 depicts the configuration of the backplane. The switching element is the card at the center of this star configuration.<br /><br />For a system with a redundant switching element, this arrangement can be extended with a dual-star backplane, with each I/O card attached to a pair of switching element cards.<br /><br />Algorithms for Packet Scheduling and PCR Correction<br /><br />Two of the key operations that the multiplexer must perform to produce a valid MPEG-2 Multi-Program Transport Stream (MPTS) from MPEG-2 Single-Program Transport Streams (SPTS) are Packet Scheduling and PCR Correction. Packet Scheduling involves timing the placement of output packets in the transport multiplex to meet the rate requirements for each stream. PCR correction involves generation of timestamps in the output stream.<br /><br />The PCR is a timing field placed at intervals of no greater than 100 ms in MPEG-2 transport packets. This field is a 42-bit sample of the 27 MHz clock used when encoding the program. A decoder uses this field to construct a clock signal that has the same frequency as the original clock used during encoding. 
The decoder uses this for the following purposes: (1) Synchronization of the rate of consumption of data with the production rate of data; (2) A reference for the synchronization of elementary streams; and (3) Other system-level purposes, notably the generation of a clock controlling the chroma sub-carrier.<br /><br />The problem facing the multiplexer, as shown in FIG. 36, is to correct the transmitted PCR to account for differences in data rate of the output transport stream versus the input transport stream. In FIG. 37 an input Transport Stream is depicted running at a clock rate that is slower than the clock rate of the output Transport Stream. For example, an input Transport Stream from a satellite might be received at 27 Mb/s and be remultiplexed by the multiplexer for use within a cable system at 28 Mb/s.<br /><br />The value of PCR<sub>2,2</sub> must therefore be adjusted so that the time between PCR timestamps is kept the same (i.e., T<sub>1</sub> = T<sub>2</sub>).<br /><br />Notice that two elements of the correction are required: a shift of the time value, representing the fact that the output packet is not output at precisely the same time as the input packet arrives, due to a phase difference between incoming and outgoing packets and the perturbation of the packet's position in the stream by other packets multiplexed into the output stream; in addition, the time value of the PCR is adjusted because the length of time to transmit a packet at rate R<sub>2</sub> is less than the time to transmit a packet at rate R<sub>1</sub>.<br /><br />A PCR discontinuity is a jump in the PCR timestamp value. It is signaled by a bit in the MPEG header. Since the delta value applied to PCRs should remain the same, the Multiplexer should continue to apply the correction required for a rate change between the original input multiplex and the output multiplex.<br /><br />MPEG-2 defines that between two packets bearing a PCR in a program stream, the data rate must be piecewise constant. 
Thus the number of bits and the PCR timestamps in a stream can be used to determine the expected delivery rate of a segment of a stream between two PCR-timestamped packets. Within a time interval defined by the smallest PCR-to-PCR difference we must schedule output packets at the appropriate rate, preferably uniformly distributing packets over this interval.<br /><br />Ideally the multiplexed multi-program transport stream created from several input single-program transport multiplexes should have packets uniformly distributed within a time period, with null packets created at appropriate intervals to bring the transport stream data rate up to the required level. Within an arbitrary time interval the combined data rates will not exactly match the required rate because data is only available in 188-byte packets. The diagram in FIG. 38 shows time represented on the X-axis, but with varying amounts of data represented between PCR timestamps of a stream. This shows the scheduling period between two PCR timestamps. The period is between the closest timestamps across all the streams that make up the multiplex. For each stream the bit rate R<sub>i</sub> is constant between adjacent PCR timestamps. Therefore for stream S<sub>i</sub> the rate is:<br /><br />R<sub>i</sub> = B<sub>i</sub> / (PCR<sub>i,j+1</sub> − PCR<sub>i,j</sub>)<br /><br />where B<sub>i</sub> represents the number of bits between PCR-timestamped packets in stream i, and PCR<sub>i,j</sub> and PCR<sub>i,j+1</sub> represent the clock samples in adjacent PCR-timestamped packets, their difference expressed in units of time of the 27 MHz clock. In practical systems the rate information will change in a couple of ways:<br /><br />Constant Bit Rate (CBR): R<sub>i</sub> varies little from an average (±5%). Average bit rates are around 4 Mb/s.<br /><br />Variable Bit Rate (VBR): R<sub>i</sub> varies depending on scene complexity (±200%) but still has an average value. Average bit rates are around 3.25 Mb/s.<br /><br />FIG. 39 depicts several single-program transport streams in FIFOs being multiplexed into an output transport multiplex. 
Note that the FIFO must be long enough to contain two PCR timestamps for a stream. MPEG-2 rules state that the maximum distance between PCR timestamps shall be 100 ms. The following notation is used: PCR<sub>i,j</sub> is the PCR time for timestamp j within stream i; F<sub>i</sub> is the number of bits present in the FIFO for stream i; T<sub>i</sub> is the time between PCR timestamps (PCR<sub>i,j+1</sub> − PCR<sub>i,j</sub>); D<sub>i</sub> is the number of data bits between PCR timestamps; and R<sub>i</sub> is the rate of stream i (D<sub>i</sub>/T<sub>i</sub>).<br /><br />Assuming that the peak data rate of each stream never exceeds some value P (for example 4 Mb/s), then the peak output data rate is the number of streams (N) multiplied by P (e.g., 10 streams at 4 Mb/s = 40 Mb/s). However, the actual data rate at any time might be lower than the peak rate, and therefore the sum of the data streams would be slightly lower than the peak rate (N*P). The output Transport Multiplex must have a fixed data rate (typically 38.8 Mb/s for 256-QAM channels). Null packets must therefore be generated at a rate of:<br /><br />R<sub>null</sub> = OutputTransportRate − Σ R<sub>i</sub><br /><br />where R<sub>i</sub> is the rate in effect for each stream during the scheduling period. Note that this can potentially change at the end of every scheduling period, as illustrated in FIG. 38. R<sub>null</sub> can be thought of as a virtual stream of null packets to be multiplexed uniformly with packets from the other streams.<br /><br />The FIFO Fullness algorithm simulates real-time data arrival rates for each stream by computing a value F<sub>i</sub> for each input FIFO that represents the number of bits waiting to be output from the FIFO. When the number of bits exceeds a whole transport packet, that input FIFO is selected for output. During the time period T<sub>output_pkt</sub> that it takes to output a transport packet at the output transport rate, a certain number of bits will arrive at the input FIFO according to the rate for that stream. If no FIFO contains enough data to output a whole transport packet, then a null transport packet is generated. 
Note that by creating null packets when required, R<sub>null</sub> does not need to be calculated explicitly.<br /><br />The algorithm proceeds as follows: (1) For the next time period in which output is scheduled, compute the input bits per stream for one output packet time T<sub>output_pkt</sub>: B<sub>i</sub> = Bits_per_packet × R<sub>i</sub> / Transport_Rate. (2) Load a vector F with B (F<sub>i</sub> = B<sub>i</sub>). (3) Set T to a time value representing the PCR value at the start of the period. (4) Select the stream i with the largest value of F<sub>i</sub>. (5) If F<sub>i</sub> >= 1504 bits (one 188-byte transport packet), output the packet from stream i and decrement F<sub>i</sub>; if F<sub>i</sub> < 1504, output a null packet. (6) For every element of the vector F, increment F<sub>i</sub> by the number of bits B<sub>i</sub> that would enter the FIFO during the output time period. (7) Increment T by the number of 27 MHz clock ticks represented by the time taken to output a single packet at the Output Transport Rate. (8) Return to Step 4 until T >= the PCR value at the end of the period.<br /><br />A pseudo-code version of this algorithm is:<br /><br /><pre>
int B[NSTREAMS];              // bits entering each input FIFO per output packet time
for (i = 0; i < NSTREAMS; i++) {
    B[i] = BITSPERPACKET * R[i] / TRANSPORTRATE;
}

int F[NSTREAMS];              // predicted bits in the input FIFO for each stream
if (INITIALIZE) {
    for (i = 0; i < NSTREAMS; i++)
        F[i] = B[i];
}                             // otherwise retain F from the previous period

int currentPCR = initialPCR;  // PCR time for this packet
int selectedStream, selectedF;

while (currentPCR < finalPCR) {
    // find the fullest FIFO
    selectedStream = 0;
    selectedF = F[0];
    for (i = 1; i < NSTREAMS; i++) {
        if (F[i] > selectedF) {
            selectedStream = i;
            selectedF = F[i];
        }
    }
    if (selectedF < BITSPERPACKET) {
        OutputNull();         // no FIFO has enough bits for a whole packet
    } else {
        OutputFIFO(selectedStream);
        F[selectedStream] -= BITSPERPACKET;   // one packet's worth of bits leaves
    }
    // bits arriving at each FIFO during one output packet time
    for (i = 0; i < NSTREAMS; i++)
        F[i] += B[i];
    currentPCR += PCRTICKSPERPACKET;
}
</pre><br /><br />Note that the computations are performed on ideal values of the PCR clock and output QAM rate, and a final adjustment of the PCR following scheduling is required. The algorithm outlined above will also require adjustment for integer round-off error when computing the rates B<sub>i</sub> when the ratio of the input rate to the output rate is not a whole number. Further null-packet generation may also be required to match the real QAM output rate to the value used in scheduling.<br /><br />The real input FIFO supplying data for each single-program transport stream should be kept full with data retrieved from disk. A per-stream flow control mechanism is required to assure that enough data is available when needed but that too much data is never delivered. Mechanisms to do this include flow-control feedback to the disk controller/subsystem, and buffer-fullness measurements based upon an observed average data stream rate.<br /><br />MPEG-2 rules indicate that when a PCR discontinuity is indicated, the data rate between the PCR with the discontinuity indicator set and the previous PCR remains the same as the rate for the previous segment. To simplify explanation of the scheduling algorithm, this feature has not been included in the pseudo-code described above.<br /><br />The theoretical System Time Clock used at the MPEG-2 encoder and at the decoder is 27 MHz. Given perfect clocks, the decoder should consume data at exactly the same rate that the encoder creates data. However, instead of perfect independent clocks, it is more practical to keep the clocks of producer and consumer synchronized at an arbitrary rate near the required 27 MHz rate using the PCR mechanism.<br /><br />In theory, the same problem exists between the original encoder and the multiplexer as exists between the multiplexer and the set-top box. 
Namely, that the multiplexer will send out data to the set-top box faster than data is arriving. In practice the allowable clock difference is 1620 Hz (27,000,000 &plusmn; 810 Hz), and at this rate difference it would take many days to consume the buffer capacity built into the Storage/Multiplexer system.<br /><br />When scheduling computations are done, they use a theoretical input clock of 27 MHz and a theoretical output rate of 38.8 Mb/s, without regard to the actual input or output rate. Since the output rate may vary slightly from this value, it is left to the final stage of processing and PCR correction to occasionally insert a null packet to account for any long-term under-run of data supplied by the scheduler. It is assumed that a long-term over-run of data cannot occur, as the clock will run slightly faster than its theoretical perfect rate.<br /><br />Head-End Interconnection Protocols<br /><br />The components depicted in FIG. 40 constitute a complete system for the RetroVue network PVR system. Components are interconnected via 1 and 10 gigabit Ethernet links. The INPUT unit captures MPEG-2 transport streams on ASI and DHEI interfaces and encapsulates them for transmission over 1 gigabit Ethernet to the STORAGE unit. Multiple INPUT units can transmit to multiple STORAGE units. The STORAGE unit transmits data over 10 Gigabit Ethernet to the GIGAQAM unit for MPEG-2 remultiplexing and transcoding into RF output.<br /><br />The BCM-5632 and other switching chips provide a good technology for logical backplanes because of their greater-than-wire-speed switching and large-table address filtering features. Such chips enable creative use of layer two addressing to "route" and replicate (multicast) packets to multiple ports, cards and modules. Such backplanes are usually self-contained within platforms, but can be logically extended across multiple platforms. 
In the multi-platform configuration, because of the novel use of addresses, care must be taken either to isolate the inter-platform traffic from other "data" traffic (layer two and layer three), or to design the addressing to be externally consistent with global addressing standards and usage. Note: "route" as used in this document describes the packet processing function that inspects the layer two address of a packet received from an input port and, based upon an address filter table, decides on which output port or ports the packet is to be forwarded. Such chips enable routing of packets based upon IEEE 802 48-bit destination addressing, both unicast and multicast. Thus any algorithm that utilizes the filtering logic of such chips must translate information that affects routing into 48-bit IEEE 802 addresses.<br /><br />The terms IEEE 802, layer 2, and Ethernet are used interchangeably throughout this document. Technically, IEEE 802 is the correct descriptor for the addressing; Ethernet is the correct descriptor for the MAC layer functionality of the hub switching chip.<br /><br />To set up and manage the address and filtering tables of the switching chip, a controller application is required. The normal dynamic learning and dynamic aging functions of the switching chip will not be usable in these applications without dynamic configuration and intervention from the controller.<br /><br />Filtering can be implemented for both layer two unicast addresses and layer two multicast addresses. The advantage of multicast addresses is that the hub switch chip can replicate the packet to multiple destinations.<br /><br />In single program transport stream (SPTS) video applications, the normal use of the switching chip would be to route and replicate packets based upon unique per-stream IEEE 802 addressing. 
It is assumed that each packet contains MPEG frames from no more than one SPTS (thus no further demultiplexing and remultiplexing is required within the SPTS packet). In a backbone distribution scenario, if a single stream were to be replicated to multiple destinations, then multicast addressing would be used. For distribution to a single destination, unicast addressing can be used. It is also acceptable, though inefficient, to achieve multicasting by flooding: replicating packets with different unicast addresses.<br /><br />An SPTS has one or more PIDs that are logically associated together as a single program stream, e.g. audio plus video. Section 18.3 below describes a novel use of addressing, as time-markers, to spread SPTS packet chains across multiple storage disks, even disks of variable size. The address table within the switching chip has sufficient capacity to support the large number of addresses. This addressing mechanism must be contained within the logical disk storage platform.<br /><br />In distributing SPTS packet streams to downstream EdgeQAM and EdgeMux devices, the preferred form of addressing is layer two unicast and multicast addressing. While it is possible to create structured addressing that maps to specific cards and QAM transport streams, such use is not recommended: it requires the upstream traffic generators (e.g. the node that de-spools video files and cached video from disk) to have configuration knowledge of the destination device, and it prevents the use of multicast replication (e.g. where one stream goes to multiple output QAM ports in order to feed multiple fiber nodes). If unique unstructured addressing is used, then the SPTS can be delivered to any type of device, and over any type of intermediate layer two switching network. 
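The route-and-replicate behaviour described above can be modelled as a simple map from destination MAC address to a set of output ports: a unicast entry forwards to one port, a multicast entry replicates to several. The addresses and port numbers below are made up purely for illustration; a real filter table lives in the switching silicon.

```python
# Toy software model of a layer-two address filter table: each destination
# MAC maps to the set of output ports the switching chip forwards to.
# Addresses and port numbers are illustrative only.

FILTER_TABLE = {
    "02:00:00:00:00:01": {3},        # unicast SPTS -> a single output port
    "01:00:5e:01:02:03": {1, 4, 7},  # multicast SPTS replicated to 3 ports
}

def forward(dest_mac):
    """Return the output ports for a destination address (empty set = drop)."""
    return FILTER_TABLE.get(dest_mac, set())
```

The controller application mentioned earlier would be responsible for installing and removing entries like these, since dynamic learning is disabled.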
Definition EdgeQAM: a device which accepts video input from one or more Ethernet and/or ASI input ports, provides multiplexing functions, builds one or more transport streams, modulates each transport stream, and delivers it over one or more RF output interfaces.<br /><br />Definition EdgeMux: a device which accepts video input from one or more Ethernet and/or ASI input ports, provides multiplexing functions, builds one or more transport streams, and delivers them over one or more ASI or Ethernet output interfaces.<br /><br />IETF standards do exist for carriage of embedded MPEG SPTS transport streams within IP networks. They are being utilized initially between VOD vendors and EdgeQAM and EdgeMux vendors. The two standardized forms are: (1) UDP/IP encapsulation, and (2) RTP/UDP/IP encapsulation. A third form seems worth standardizing: (3) raw Ethernet encapsulation.<br /><br />In the historic design of MPEG systems, the MPEG PID was considered necessary and sufficient for addressing. This was feasible because the implementation architectures were star-topology networks. A physical or logical (SONET, 6 MHz QAM/QPSK) channel was used to encapsulate all packets. Indeed, in a multiple program transport stream, within a specific channel wrapper, all PIDs are unique.<br /><br />In converged IP and Ethernet systems the MPEG PID is no longer unique. 
Even worse, the PID, which from the perspective of the transport layer is data rather than header, cannot be used as input to the routing function that exists in layer two and layer three switching and routing infrastructures.<br /><br />Several addressing attributes can be considered in identifying and switching/routing single program streams: (1) IP source address; (2) UDP source port; (3) Ethernet source address; (4) IP destination address, unicast or multicast; (5) UDP destination port; (6) Ethernet destination address, unicast or multicast; (7) MPEG PID.<br /><br />In a simple world, one standard would be defined. In the real world, flexibility is required to accommodate a variety of formats.<br /><br />One of the more common forms of generating unique SPTS addresses will be at the IP layer: the 4-tuple <IP Source Address, IP Destination Address, UDP source port, UDP destination port> will unambiguously identify a SPTS. In such addressing schemes the layer two addressing may not help at all; e.g. all packets from all SPTS streams may be addressed from one Ethernet source address and delivered to one Ethernet destination address. If such packets are to be processed by a switching hub backplane, they will need to be re-encapsulated into Ethernet packets with a destination address that is uniquely associated with the above 4-tuple. This requirement suggests a wire-speed input classification function that inspects the 4-tuple and outputs a unique Ethernet address via a classification-mapping table. The filter function of the switching hub backplane can then be configured to filter and deliver all packets of a specific SPTS stream to a specific port or group of ports.<br /><br />For IP layer addressing a common convergence standard is MPLS. An IP router is capable of classifying traffic based upon the above 4-tuple and delivering the packets to specific MPLS-labeled tunnels between routers. 
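The classification-mapping table suggested above can be sketched as a dictionary from the 4-tuple to a locally administered Ethernet address. The allocation scheme here (a simple counter packed into the low bytes of a `02:...` address) is hypothetical, chosen only to show the one-to-one mapping; the real function would run at wire speed in hardware.

```python
# Sketch of a 4-tuple classifier: each IP/UDP 4-tuple is assigned a unique,
# locally administered destination MAC so that layer-two filter hardware
# can route the SPTS. The allocation scheme is made up for illustration.

class FourTupleClassifier:
    def __init__(self):
        self._table = {}   # (src_ip, dst_ip, sport, dport) -> MAC string
        self._next = 1     # hypothetical allocation counter

    def mac_for(self, src_ip, dst_ip, sport, dport):
        key = (src_ip, dst_ip, sport, dport)
        if key not in self._table:
            n = self._next
            self._next += 1
            # 02:... sets the locally-administered bit; low 32 bits hold n.
            self._table[key] = "02:00:%02x:%02x:%02x:%02x" % (
                (n >> 24) & 0xFF, (n >> 16) & 0xFF, (n >> 8) & 0xFF, n & 0xFF)
        return self._table[key]
```

Two streams that differ in any element of the 4-tuple receive different MACs, while repeated lookups for the same stream are stable, which is exactly what the backplane filter table needs.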
However, the MPLS encapsulation does not typically extend to end devices, so neither the source nor the destination end device will necessarily see the MPLS tunneling layer. There are, however, devices that do receive packets from MPLS tunnels, such as the Cisco video router.<br /><br />It is possible for the layer two header of a SPTS stream to be defined in a way that assists the filtering and routing function. A source can transmit a SPTS stream with an IP multicast address. IP defines a mapping between the IP multicast address and a layer two multicast address: the bottom 23 bits of the 28-bit IP multicast group address are mapped to a bit-for-bit identical Ethernet multicast address. Because of this partial mapping, 32 IP multicast addresses can map to a single Ethernet multicast address. IP multicast addresses will be unique, i.e. each SPTS will be carried with a separate and unique IP multicast address. If the network and application administrator assures no overlap of IP addresses, i.e. does not vary the upper five bits of the group address, then each SPTS has a unique layer two multicast address. In this configuration the switching hub has sufficient information to enable it to route and replicate packets at wire speed. Note: because of the high-order five-bit truncation in the IP-to-IEEE 802 address mapping, it is possible in converged-services networks that packets may be routed and received from multicast addresses that are not meant to be received. Such packets will occupy link bandwidth during transport, and switching bandwidth across the bus. The module that receives such packets will need to provide a final filtering function to reject those packets that should not be received, by inspecting the IP address at a minimum, and the 4-tuple at the maximum.<br /><br />Other addressing variants are certainly possible. A system can be conceived which operates purely at layer two. 
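The 23-bit mapping described above is the standard IPv4 multicast-to-Ethernet rule: the low 23 bits of the group address are copied beneath the 01:00:5e prefix, so group addresses that differ only in their high five bits collide on the same Ethernet address.

```python
# Standard IPv4 multicast-to-Ethernet address mapping: prefix 01:00:5e
# plus the low 23 bits of the multicast group address.
import ipaddress

def mcast_mac(group):
    """Map an IPv4 multicast group to its Ethernet multicast address."""
    low23 = int(ipaddress.IPv4Address(group)) & 0x7FFFFF  # keep low 23 bits
    return "01:00:5e:%02x:%02x:%02x" % (
        (low23 >> 16) & 0xFF, (low23 >> 8) & 0xFF, low23 & 0xFF)

print(mcast_mac("224.1.1.1"))   # 224.1.1.1 and 225.1.1.1 yield the same MAC
```

This is why the administrator must avoid varying the truncated bits: 224.1.1.1 and 225.1.1.1 are distinct IP groups but identical at layer two, forcing the receiving module to re-filter on the IP header.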
A source can be conceived that originates SPTS streams with layer two multicast addresses that are unique only to the originator. In this configuration the destination will have to inspect the source address to disambiguate two separate SPTS streams received with the same destination address. More likely, however, such a system will have a dynamic address resolution protocol that assures uniqueness across multiple sources. Alternately, the multicast address space can be statically partitioned administratively so that no address space overlap occurs between originators. In either case (address resolution protocol or static configuration of unique addresses), the layer two switching environment can uniquely route packets across backbones and internal hub switching fabrics to their intended destination. The three different units have both preferred protocols for communication with each other and compatibility protocols for the equivalent units from other manufacturers.<br /><br />In all encapsulations similar techniques are used to pack MPEG-2 transport packets of length 188 bytes into standard Ethernet or 802.3 packets with a maximum payload of 1500 or 1492 bytes respectively. An MPEG Single Program Transport Stream (SPTS) contains a collection of PID-addressed transport packets, and up to seven 188-byte packets are encapsulated within the same Ethernet frame.<br /><br />The following table outlines the protocol processing requirements of the various components.<br /><br /><table><tbody>
<tr><td><b>COMPONENT</b></td><td><b>INPUT PROTOCOL</b></td><td><b>OUTPUT PROTOCOL</b></td></tr>
<tr><td>INPUT</td><td>ASI, DHEI</td><td>TSE, DAE, SIE, IP SPTS</td></tr>
<tr><td>STORAGE</td><td>TSE</td><td>DAE, SIE, IP SPTS</td></tr>
<tr><td>GIGAQAM</td><td>DAE, SIE, IP SPTS</td><td>RF SPECTRUM</td></tr>
</tbody></table><br /><br />A single program transport stream consists of packets identified with several PIDs, encapsulated with a destination Ethernet address that is composed of two elements. The Stream Identifier is a system value used to uniquely identify an SPTS, and therefore the disk-based circular buffer that will store the stream. 
The Time Slice Identifier, as shown in FIG. 41, is a value that identifies the point in time at which a given collection of packets arrived. Each time slice is stored on a different logical disk within the STORE.<br /><br />The Ethernet frame is formulated with the structure shown in FIG. 42. Notice that for this internal Ethernet packet the type field can also be made unique, allowing these frames to be identified as non-IP frames (IP being type 0x0800). The Ethernet source address can be used to identify the unique stream identifier, the ASI port of ingress, or the daughterboard aggregating ASI inputs into the Ethernet stream.<br /><br />The Destination Addressed Ethernet SPTS protocol, as shown in FIG. 43, utilizes a conventional addressing scheme in which the destination Ethernet addresses identify a stream. Each SPTS is identified with a unique multicast destination address. Multicast destination addresses are used to route an SPTS to more than one device. This protocol is used between the STORAGE device and the gigaQAM, with multiple source disks, each with a different Ethernet source address, sending slices of a video stream to a daughterboard of the gigaQAM device. The destination Ethernet address is encoded as shown in FIG. 44 to identify the daughterboard and the Multi-Program Transport Stream in which the video stream is to be multiplexed. Note that streams can be differentiated by the Ethernet type field. In the gigaQAM card the MPTS Id portion of the Ethernet address is used to demultiplex the Ethernet packet further. The Card Id portion of the address is used to route the packet through the switch to the appropriate gigaQAM card.<br /><br />Ethernet packets from external sources, as shown in FIG. 45, may not comply with the destination addressing scheme, due to lack of knowledge of the internal structure of the system. 
In this case it is likely that a combination of the source address and an identifier contained within the Ethernet payload will uniquely identify the video SPTS. An example Ethernet frame structure for carriage of transport packets is shown in FIG. 46. However, this is only one of many possible structures, and devices that receive data in this form must be parameterizable to extract the stream identifier. Note that packets using this method can be differentiated by the Ethernet type.<br /><br />To facilitate carriage of MPEG SPTSs through general-purpose networks, it is likely that groups of MPEG transport packets will be encapsulated in IP/UDP packets, as shown in FIG. 47. None of the existing Requests for Comments (RFC 2250, RFC 2343) addresses carriage of transport packets, only carriage of packetized elementary streams (PES). This omission makes sense when only IP-to-IP device communication is considered, but it is not useful for interworking with conventional synchronous cable and satellite networks. The protocol format required is likely to have the structure of FIG. 48, where the stream identifier is mapped onto either the UDP port number or a header within the application packet.<br /><br />The INPUT unit 400 is likely to be interfaced with external video servers and directly to edge QAM devices, as it provides an economical ASI/DHEI-to-gigabit-Ethernet transcoding device. It should therefore support all of the above protocols.<br /><br />The STORAGE device is partially dependent on the ability of the INPUT device to provide a time-sliced stream; it should therefore implement only the Time Slice Ethernet protocol for input purposes, but should support all other protocols to external edge QAM devices, although the preferred output protocol is Destination Addressed Ethernet SPTS.<br /><br />The gigaQAM device is likely to have to interface with a variety of video and data server equipment and must support a variety of protocols. 
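As a concrete check on the packing arithmetic behind the IP/UDP encapsulation above: seven 188-byte transport packets occupy 1316 bytes, which fits within the roughly 1472-byte UDP payload available in a standard 1500-byte Ethernet frame. The grouping logic below is a generic sketch, not the patent's exact frame format.

```python
# Group MPEG-2 transport packets (188 bytes each) into UDP payloads of at
# most seven packets, the usual limit for a 1500-byte Ethernet MTU.

TS_PACKET = 188
MAX_PER_DATAGRAM = 7               # 7 * 188 = 1316 <= ~1472-byte UDP payload

def packetize(ts_packets):
    """Yield lists of up to seven TS packets; each list is one UDP payload."""
    for i in range(0, len(ts_packets), MAX_PER_DATAGRAM):
        yield ts_packets[i:i + MAX_PER_DATAGRAM]
```

An eighth packet would push the payload to 1504 bytes, past the MTU, which is why seven is the customary ceiling in all the encapsulations discussed here.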
With 10 Gb/s input, very little processing is feasible before switching to the internal multiplexing/QAM cards. The preferred input is the Destination Addressed Ethernet SPTS described above.<br /><br />All of the above protocol descriptions are an attempt to map a layer 4 stream address onto elements of the underlying layer 2 Ethernet and layer 3 IP addressing schemes. This is done to take advantage of the switching silicon available for these protocols. The following parameters show how this can be done in a fairly generic way by FPGA and network processing elements. In FIG. 49 the items labeled Address A through Address G represent positions where a stream id, card id and MPTS id could be positioned in an Ethernet packet. The values X bytes through Z bytes represent fixed gaps between the locations where addresses must be encoded. This assumes that 1-7 MPEG transport packets are encapsulated contiguously in the remainder of the transport packet. Some difficulties in characterizing a generic encapsulation method are: (1) the IP 16-bit identification field, which must be incremented with each successive packet; (2) the IP and UDP header checksums, which must be recomputed after the address is inserted; (3) the fixed values to place in the X, Y, and Z parts of the packet; and (4) unknown parameters that must be computed in the application header.<br /><br />VoD Using Storage Unit Interconnection<br /><br />Due to the requirements that the Storage unit be capable of deriving 1280 streams from a single stored channel, and that it do so using inexpensive disk drives, the Storage unit has far more disk space available than can be utilized in delivering the simple RetroVue service. 
The additional space cannot be utilized for streaming output services that must be guaranteed to be available, but it can be utilized for services such as VoD, in which service can be denied if resources are not available.<br /><br />In order to make the additional disk space available, the following interconnection between Storage units is allowed. FIG. 50 shows interconnection via an IEEE 802.17 Resilient Packet Ring, but it is also possible to interconnect systems using a GigE interface to an Ethernet switching infrastructure. Interconnection in this fashion will also require a reliable file transfer protocol, unicast and/or multicast, for transfer of video files between Storage units.<br /><br />Server Unit<br /><br />The server is a generic platform that can run the management application, RetroVue application, OOB application, and/or MiniServer application. One or more servers may be configured in a given system depending upon traffic loading and capacity.<br /><br />The Server is a 3 RU server based on an Intel Pentium hardware platform, with memory and fans. The optional OOB module is installed in one of the PCI interface connectors.<br /><br />The Server has the following external interfaces: (1) 110/220 power, with on-board power supply; (2) LEDs for system status monitoring; (3) a 10/100BT connector for access to (a) the management network, (b) server module management control, and (c) the out-of-band network to the home, e.g. an external CMTS; and (4) spare RF connectors, when the OOB module is installed.<br /><br />The server contains a Linux kernel, TCP/IP stacks, an SNMP stack, an embedded HTML server, the primary management interface, an OOB DAVIC MAC subset (when the OOB hardware module is installed), and the application. 
The server module implements the following software functions: (1) being the headend endpoint for point-to-point communications to all MiniBoxes (1:N, bi-directional), either via Ethernet to an external OOB communications infrastructure or via an internal interface to the internal OOB module; (2) the remote control protocol and associated application: channel change, pause, rewind, forward, restart; (3) control of all MPEG streaming functions, including PID assignments and routing to specific QAM transport streams; (4) generation/forwarding of OOB downstream elements: Conditional Access (CA) messages including entitlements, System Information (SI) messages, Electronic Program Guide (EPG) messages, Emergency Alert System (EAS) messages, and other generic messages; (5) a file transfer interface with hierarchical VOD system elements for transferring VOD files into the Server, along with functions for deleting or modifying stored VOD files; (6) a management interface to redirect a (pre-encrypted) stream to the management station, for viewing of output quality; (7) the CableLabs DSG protocol and associated MIB, with possible adaptations for DVB; and (8) interfacing with one or more external Simulcrypt CA systems.<br /><br />The server has an option for installation of one or more OOB RF cards. These cards implement the PHY layer of the OOB protocol. The OOB MAC is implemented in software within the server. 
The OOB MAC and PHY implementation contains all registration, ranging, authentication, and networking functions needed for communicating with STBs.<br /><br />The OOB implementation is based upon the following assumptions: (1) it is compatible with the hardware capability of the Broadcom 7100 (DAVIC); (2) low upstream bit rate (300-600 bps average); (3) collisions and retransmissions are handled with upper-layer protocol mechanisms; (4) only a single IP/MAC address on the network side; (5) no bridging or routing forwarding functions: all traffic is forwarded to the host; (6) potential elimination of scheduled-TDMA traffic modes, since all upstream traffic is contention based and the bit rate is low; (7) no traffic filtering or prioritization functions; and (8) a simplified DAVIC OOB protocol, for compatibility with the Broadcom 7100.<br /><br />It is also possible to install the OOB function without any other server functions. In this case the OOB implementation is expanded to include the following additional functions: (1) an O(1000)-entry MAC and IP address learning table; (2) a bridging function for learning, filtering, and forwarding traffic between the OOB HFC RF and 100BT ports; and (3) no classification functions.<br /><br />The server is a typical Pentium 4 PC unit with the 845D chipset. It provides one 10/100BaseT Ethernet interface to control the different modules through a 10/100BaseT hub. It also provides 6 PCI slots, allowing up to 6 OOB PCI cards to be plugged in. The 6 OOB PCI cards support a total of 24 upstream burst receivers and 6 OOB downstream transmitters.<br /><br />The server motherboard block diagram is shown in FIG. 51. The VC17 motherboard made by FIC, shown in FIG. 51, can be used directly as the controller unit motherboard.<br /><br />The controller unit will be 3 RU to support the OOB PCI cards. The preferred mechanism for MiniBox communication is through a DOCSIS return path. 
However, an economical return path can be implemented using a DAVIC-style mechanism.<br /><br />This section describes the design of an OOB system which utilizes the DAVIC PHY layer and an Aloha-style MAC layer. The implementation consists of the OOB Forward Data Channel (FDC) transmitter and the OOB Reverse Data Channel (RDC) receiver. The OOB Transmitter and Receiver card is installed in the Server Unit.<br /><br />Head-End OOB Burst Receivers And Transmitter PCI<br /><br />The head-end OOB PCI card is implemented with 4 head-end OOB burst receivers and one head-end OOB transmitter. FIG. 52 shows the block diagram of the head-end OOB PCI card.<br /><br />One PCI card with four head-end OOB burst receivers and one head-end OOB transmitter is implemented, but the MAC layer is not included. It is expected that the host CPU will do the MAC processing and interface to the OOB card via the PCI interface.<br /><br />If a dedicated DAVIC MAC hardware implementation is required, an additional microprocessor, an FPGA and some memory can be added, or a MAC IC can be added. If necessary, two such cards can be used for one complete system.<br /><br />Headend OOB Transmitter requirements: Support DVS-167 (DAVIC): data rate 1.544 Mb/s (1 MHz band) or 3.088 Mb/s (2 MHz band); differential QPSK; frequency range 70 MHz to 130 MHz; frequency step size 250 kHz; RS code (55, 53, T=1) over GF(256), 8-bit symbols; Signaling Link Extended Super-frame (SL_ESF) framing; convolutional interleaving (55, 5). Support DVS-178 (GI): data rate 2.048 Mb/s (1.8 MHz band); differential QPSK for 90-degree phase invariance; frequency 72.75 MHz, 75.25 MHz, or 104.2 MHz; RS code (96, 94, T=1) over GF(256), 8-bit symbols; locked to the MPEG-2 TS, with two FEC blocks per MPEG packet; convolutional interleaving (96, 8).<br /><br />An implementation similar to the In-Band QAM transmitter is used for the Headend OOB Transmitter. 
The headend needs to implement a QPSK modulator with differential coding and roll-off 0.3 or 0.5, an RS encoder, a randomizer and a convolutional interleaver. The FPGA for the channel FEC encoding can support either the DAVIC or the DCII format.<br /><br />The BCM7100 OOB FDC receiver supports: DAVIC: de-randomizer, RS decoder (55, 53), output stream with appropriate DAVIC controls, interfacing with the on-chip DAVIC MAC. DCII: RS (96, 94); output clock, data and sync signals to the on-chip transport demux.<br /><br />The BCM7100 supports the following OOB burst modulator: Starvue, 4/16-QAM, burst FEC, 1 kb burst FIFO; programmable randomizer, RS encoder, preamble prepend, symbol mapper, and pre-equalizer; roll-off Nyquist factor 0.25 or 0.5. It meets the DAVIC and DCII requirements: DVS-167 (DAVIC): data rate 256 Kb/s (200 kHz band), 1.544 Mb/s (1 MHz band) or 3.088 Mb/s (2 MHz band); differential QPSK; frequency range 8 MHz to 26.5 MHz; frequency step size 50 kHz; power level 25 dBmV to 53 dBmV. DVS-178 (GI): data rate 256 Kb/s (192 kHz band); differential QPSK for 90-degree phase invariance; frequency range 8.096 MHz to 40.160 MHz; frequency step size 192 kHz; power level 24 dBmV to 60 dBmV. A BCM3138 dual burst receiver with external ADCs is used for the burst demodulator and decoder.<br /><br />A burst receiver FPGA implementation can be done at lower cost, but considerable effort is required.<br /><br />The implementation does not include a dedicated complete DAVIC or DOCSIS MAC layer. Based on the head-end video service system architecture, the following MAC elements are supported: (1) ranging and (2) contention. The host CPU will do the necessary MAC processing and interface the application and MAC with the OOB card via the PCI interface. If a complete DAVIC 1.2 MAC layer dedicated hardware implementation is required, an additional microprocessor, an FPGA and some memory can be added, or a MAC IC can be added. 
The above head-end OOB burst receiver and transmitter PCI implementation is scalable.<br /><br />The preferred mechanism for MiniBox return path messages is a DOCSIS-compliant return path. A standard single-rack-unit Phoenix CMTS implements this unit.<br /><br />Two extensions to the base Phoenix system are implemented for the present invention: (1) a CableLabs standard for efficient multicasting of OOB data over DOCSIS, the Digital Settop Gateway (DSG); and (2) a CableLabs extension for support of one-way DOCSIS settop boxes, i.e. continued operation in the case of a lost return path. The CMTS may or may not be shared with other services such as data and voice. The determination of which services and devices are handled by each CMTS is a configuration option under the control of the operator. When necessary, two such cards, supporting 8 OOB burst receivers and two OOB transmitters, can be used for a complete system.<br /><br />MiniBox Hardware<br /><br />The MiniBox is a low cost, high performance set top box with a very small form factor: 5.5&Prime; &times; 6.0&Prime; &times; 1.2&Prime;. The core of the MiniBox is the Broadcom BCM7100. The MiniBox contains a smartcard interface for support of multiple conditional access schemes. A tamperproof design is also implemented which prevents access to video signals in the clear. DAVIC/DVB OOB is supported for the return channel. For the video and audio interface, the MiniBox provides high quality Super-VHS (S-video) with stereo audio L/R outputs for the TV; a composite video output is also available for a second TV or VCR. Other options, like digital audio SPDIF and component video RGB outputs, are possible. The On-Screen Display eliminates the need for a front panel. 5 LEDs are used to indicate basic functionality: Power ON, IR receive status, In-Band downstream lock, Out-of-Band Forward Data Channel reception lock, and OOB Reverse Data Channel transmission. 
An external power transformer provides a single DC input to the MiniBox.<br /><br />The software for the MiniBox is a sturdy environment with features for handling all hardware interfaces, graphics, video, conditional access, and communications with the server. Unlike other overweight designs, the MiniBox is optimized for local I/O handling and video processing; it is not a general computer workstation environment for MSO and user applications that becomes obsolete in a short time. The MiniBox is also not a residential gateway.<br /><br />The following elements are contained within the software environment: a real-time kernel (board support package, multitasking, peripherals); a slim middleware environment (application development interface, screen and I/O handling, and a networking environment for communications with the server); a conditional access module (which coordinates its function with the secure hardware co-processor); and an Electronic Program Guide (database, user interface, application environment, remote control interface, popups, etc.).<br /><br />The block diagram of the MiniBox is shown in FIG. 53. The BCM7100 single chip is used as both the communications engine and the MPEG video processing engine.<br /><br />The RF interface portion accepts a downstream In-Band QAM signal from 60 MHz to 900 MHz with a level from -15 dBmV to +15 dBmV. The BCM3415 silicon tuner IC is used for the RF-to-IF downconversion.<br /><br />The communication engine supports QAM demodulation and channel FEC decoding according to Annex A/C (DVB downstream QAM) for 8 MHz bandwidth channels and Annex B (DOCSIS QAM) for 6 MHz bandwidth channels. The MPEG-2 transport stream received by the QAM demodulator/FEC decoder is delivered to the on-chip MPEG video-processing engine.<br /><br />The OOB Forward Data Channel accepts an RF signal with frequency from 70 MHz to 130 MHz via a downconverter IC. 
Both DVS-178 and DVS-167 OOB Forward Data Channel standards are supported.<br /><br />The OOB Reverse Data Channel supports DAVIC and DOCSIS return-channel QPSK and 16QAM modulation and programmable FEC burst profiles. The OOB Reverse Data Channel frequency ranges from 5 MHz to 65 MHz with signal levels from +24 dBmV to +60 dBmV. The on-chip DAVIC MAC handles the OOB data for all possible interactive control and applications.<br /><br />The smart card, together with a tamper-resistant hardware design, controls secure conditional access.<br /><br />The On-Screen Display replaces the front panel of the set-top box; five LEDs indicate basic status as described above.<br /><br />The external power transformer provides a single DC input to the MiniBox for all required on-board power supplies.<br /><br />The MiniBox RTOS is ThreadX. ThreadX is chosen to maximize the option for source code maintenance at the hardware layer. ThreadX utilizes the board support package and provides a core set of multitasking and hardware abstraction services.<br /><br />The MiniBox middleware is OpenTV (or equivalent). 
OpenTV is chosen because of its support for an adequate graphics set, compatibility with multiple conditional access solutions, availability of a C-based application development environment for development of simple applications, a networking software environment, and availability of core applications and services like the Electronic Program Guide user interface.<br /><br />Features to be developed atop this core RTOS and middleware are:<br /><br />One or more third-party conditional access software modules. The choice of software module will be based upon the conditional access solution chosen. In all cases it is expected that the vendor will port its stack (if not already available) into the MiniBox environment and make it available as object code.<br /><br />A conditional access software module that we will develop ourselves. This stack will communicate with a secure hardware processor both via a smartcard interface (when a smartcard is installed) and via an internal interface to our on-board secure conditional access silicon.<br /><br />A Remote-Control application--handles all interaction with the remote control, including popup menus, navigation, and communications with the server via the Remote Control Protocol.<br /><br />The remote control is an infrared transmitter device that doubles as a TV controller and a MiniBox controller. In some cases the operator will desire a universal remote control with mappings from universal buttons to functions. It is anticipated that the functions required to operate the system are contained on most universal controls.<br /><br />The following describes a control specific to the present invention. A specific set of buttons is described, including mechanical function. If the operator wishes to instead use a universal remote, the buttons of the universal remote are mapped accordingly.<br /><br />The Remote Control contains a battery, an infrared transmitter and LED, and a handful of buttons. 
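The universal-remote mapping described above amounts to a lookup from universal buttons to box functions. A minimal sketch follows; the button and function names are invented for illustration and are not from the patent:

```python
# Hypothetical mapping from universal-remote buttons to MiniBox functions.
# Unmapped buttons pass through unchanged, as a universal remote would
# forward its native codes.
UNIVERSAL_BUTTON_MAP = {
    "POWER": "POWER MINI BOX",
    "OK": "DO-IT",
    "CH_UP": "CHANNEL UP",      # doubles as page-up when the guide is showing
    "CH_DOWN": "CHANNEL DOWN",
    "GUIDE": "GUIDE",
}

def translate(button: str) -> str:
    """Resolve a universal-remote button to a MiniBox function."""
    return UNIVERSAL_BUTTON_MAP.get(button, button)

selected = translate("OK")  # the MiniBox function bound to the OK button
```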
Some buttons are multi-function, with the function at any instant of time depending on the current interactive state of the user interface (only the TV or Box receiver knows the state). Some buttons are in-out pushbuttons (IOP), some are left-right rocker buttons (LRR), some are up-down rocker buttons (UDR), and some are left-right-up-down rocker buttons (LRUDR). LRR and UDR are two buttons in one physical assembly. LRUDR is four buttons in one physical assembly.<br /><br />The remote control will contain a small microprocessor (or equivalent) used in programming the remote control for various brands of TVs (e.g. so the volume control and TV-power buttons will work for the TV). The remote contains an extensive table of infrared control code protocols and sequences. The specific user interface discipline for programming the RC to control the infrared interface of a specific TV is TBD.<br /><br />The following table defines the Remote Control (RC) buttons:<br /><br /><table border="1"><tbody>
<tr><td><b>BUTTON</b></td><td><b>TYPE</b></td><td><b>FUNCTION</b></td></tr>
<tr><td>POWER TV</td><td>IOP</td><td>Power on-off TV</td></tr>
<tr><td>POWER MINI BOX</td><td>IOP</td><td>Power on-off Box</td></tr>
<tr><td>TV/VIDEO</td><td>IOP</td><td>Toggle stateful video-select mode of TV (RF->VIDEO->VIDEO2)</td></tr>
<tr><td>0-9 DIGITS</td><td>IOP</td><td>Digit input</td></tr>
<tr><td>ENTER/RETURN</td><td>IOP</td><td>Terminate channel input before time out; also doubles as return to previous channel</td></tr>
<tr><td>CHANNEL UP-DOWN</td><td>UDR</td><td>Change channel to next channel in sequence; if the interactive program guide is showing, the function is page up/page down</td></tr>
<tr><td>VOLUME UP-DOWN</td><td>UDR</td><td>Increase/decrease volume of TV</td></tr>
<tr><td>MUTE</td><td>UDR</td><td>Mute/unmute sound of TV</td></tr>
<tr><td>DISPLAY</td><td>IOP</td><td>Display or remove information box with information relevant to current state, e.g., information about current program</td></tr>
<tr><td>CLR</td><td>IOP</td><td>Context sensitive: clear display, restoring full video; initiate a delete operation, e.g., editing of a program-guide playlist</td></tr>
<tr><td>MENU</td><td>IOP</td><td>Display interactive screen with navigatable menu</td></tr>
<tr><td>DO-IT</td><td>IOP</td><td>Perform current operation</td></tr>
<tr><td>GUIDE</td><td>IOP</td><td>Display or remove a menu of program content vs. time</td></tr>
<tr><td>LEFT-RIGHT UP-DOWN CURSOR CONTROL</td><td>LRUDR</td><td>Single-position cursor movement in the context of the interactive display currently showing</td></tr>
<tr><td>PAUSE</td><td>IOP</td><td>Pause/unpause the video</td></tr>
<tr><td>PLAY</td><td>IOP</td><td>Unpause paused video</td></tr>
<tr><td>SKIP-FORWARD / SKIP-BACKWARD</td><td>LRR</td><td>Skip forward or skip backward 15 minutes or equivalent</td></tr>
<tr><td>JUMP-TO-END</td><td>IOP</td><td>Jump to end of program, or live if the program is still being broadcast and not at program end; successive depressions jump forward to the end of programs next in sequence</td></tr>
<tr><td>JUMP-TO-BEGINNING</td><td>IOP</td><td>Jump to beginning of program, or live if the program is online; successive depressions jump backward to the end of programs previously in sequence, if they are still online</td></tr>
</tbody></table><br /><br />The following user interface mechanisms, or equivalent, are needed in support of the Remote Control: (1) pop-up channel information (e.g. when doing channel-up/channel-down): channel name, program time, and program description; (2) a time-bar: a horizontal bar which shows the time epoch, start time, end time, time-hash marks (e.g. every 15 minutes), and current play-out position in time--both broadcast channels and VOD programs have time-bars (e.g. a VOD timeline begins at 0:00); (3) sounds, e.g. key-click feedback and invalid-operation (e.g. jump-forward when already at the end); (4) a pop-up status panel for long-latency operations; (5) a two-dimensional program guide with navigational controls; (6) menu-based navigation--cursors, pg-up/pg-down, select; and (7) an hourglass display indicating that a relatively long operation is being performed.<br /><br />One protocol implication has been identified. The requirement to show a time-bar with accurate resolution (e.g. to within one second) implies that a protocol mechanism is required to communicate the current time status of programs between the headend and the Box. The following design accomplishes the objectives: (1) The Box maintains a table of program start time and program end time. This table is populated by the out-of-band protocol interface. 
(2) The PCR contained within the MPEG stream enables the Box to compute the offset of the stream in time from the beginning of the program. (3) The absolute-time clock of the Box is synchronized with the time values contained within the MPEG stream through a TBD mechanism.<br /><br />The objectives of the remote control protocol (RCP) are to provide remote control of the box such that the box can, in the shortest time: (1) tune to other active broadcast channels, not personal channels; (2) activate and tune to other personal channels; and (3) present a good human-factors A/V experience to the controlling user.<br /><br />The present invention has advantages over the prior art. Unique constraints of the present invention include: (1) Real-time storage of all digital programs to enable viewing control functions such as jump back, jump forward, and pause, plus single-button return to the start of a program. (2) A two-way DOCSIS or DAVIC/DVB system with the cable return path enabled; each TV set has its own virtual channel between it and the headend. (3) The discovery that a two-way cable system cluster size, determined by noise constraints, allows each TV set access to sufficient bandwidth for a unique, dynamically allocated digital TV channel. (4) The content of each signal is individually protected against theft. (5) Synchronous Gigabit and 10 Gigabit Ethernet is used for transmission of digitized video between headend components, and certain elements in the system contain processing capabilities to restore video synchronization.<br /><br />DEFINITIONS<br /><br />1. Broadcast channel--an A/V stream bundle which is active and is viewable by one or more TVs. Such a stream is not being controlled by a remote control.<br /><br />2. Personal channel--an A/V stream that may or may not be active and which is viewable by only one TV. Such a stream is controlled by a remote control.<br /><br />3. 
MPEG stream synch--that moment in time at which sufficient audio and video frames have been received to consider the stream "in-synch" and capable of being played out to the user.<br /><br />4. A/V PID group--a group of PIDs that are used to reconstruct a real-time audio/video stream.<br /><br />Signaling Tools<br /><br />Two-way OOB channel (DVB, DOCSIS, other); one-way MPEG stream channel (PIDs in 6 MHz channelization).<br /><br />Design Considerations<br /><br />The OOB and MPEG INB channels have different bandwidth and latency characteristics.<br /><br />An end-to-end request/ack protocol must incur the combined latency of queuing, upstream scheduling, Server queuing, request processing, and downstream queuing and delivery--all mitigated by congestion. There is a significant probability that the INB stream will be in-synch before the arrival of the ACK.<br /><br />Delays may be introduced for obtaining the conditional access information (the issue is deferred for this draft).<br /><br />Options for Switching to a New A/V Stream<br /><br />1. Wait the complete Request/ACK cycle before looking for A/V stream synch (pessimistic). 2. Wait a fixed timeout before looking for A/V stream synch (optimistic). 3. Switch to a unique idle A/V PID group, and immediately begin looking for A/V stream synch (just-in-time). Note: 1 and 2 are the only options if the PID remains constant across a channel change.<br /><br />General<br /><br />1. If broadcast streams are active, a user may switch to an existing broadcast stream, and the MPEG stream synch operation can begin immediately. 2. The first control function on a broadcast stream (e.g. pause, rewind) has the potential to switch it to a personal stream (if not already a personal stream). 3. There are more PIDs than streams in the address space, so a convention can be defined for PID switching (or the client can announce the next PIDs it intends to use out of a PID pool available to the client). 4. 
The Server will switch the A/V stream in a synchronized fashion and then ACK the request. However, because the stream is being switched to a new PID, the client can camp on the new A/V PID combination immediately. The ACK is needed for transmission and error cases.<br /><br />Representative latencies: 1. HFC MPEG QAM downstream one-way--1-2 ms 2. OOB (DOCSIS, DVB) round trip--20-30 ms 3. Internal Server round trip--5 ms 4. Congestion variance--1-2 sec (above MAC-layer timeout and retransmit)<br /><br />MiniBox Procedure for Handling a Channel Change<br /><br />1. Disable A/V 2. Send channel change request to Server 3. Switch to new A/V PID combination (may require ACK response) 4. Receive I-frames (via interrupts) and audio frames 5. Enable A/V hardware<br /><br />The following is provided as an abstract reference definition of the stream processing elements required for processing MPEG program streams into output transport streams. An MPEG stream processing system has the functional MPEG stream processing elements (150 broadcast programs, 1280 personal programs) shown in FIG. 54.<br /><br />The flow of audio/video information through the stream processing components is depicted, and the following sections outline the processing expected to take place at each step.<br /><br />MPEG-2 transport stream capture: (1) 1000 Mb/s Ethernet--primary; (2) 3 (@250 Mbps per port) to 15 (@40 Mbps per port) ASI ports--secondary; (3) DHEI ports--secondary; (4) SONET--secondary; (5) 802.17 RPR--secondary (primary in the future).<br /><br />(1) Buffering for storage (2) Transport stream PSI processing (3) SI processing (DVB or DVS-???) (4) Program metadata computation (a) Rewind/Fast Forward preparation (b) Variable bit rate metadata (c) Television program identification and tagging.<br /><br />Protocol Transcoding (e.g. 
MPEG2->MPEG4)<br /><br />The concurrency requirement is 150 streams at 4 Mbps average per stream.<br /><br />Store: (1) Write 150 MPEG-2 broadcast streams into storage (each program stream is demultiplexed from the aggregate transport streams arriving via the input interface(s)); (2) Maintain a 2-hour circular buffer for broadcast; (3) Store 150 MPEG-2 files for video-on-demand (not broadcast); (4) Read 1280 MPEG-2 broadcast streams and/or files out of storage; (5) Operation mode is asynchronous (but must be able to process faster than the synchronized downstream output rate--buffers are used to absorb the timing variances); (6) Stream output is frame asynchronous.<br /><br />Process: (1) Stream arrival is frame asynchronous; (2) Multiplex N programs per output channel; (3) Statistical multiplexing of VBR streams; (4) Rate transcoding (bit rate change); (5) Protocol transcoding (e.g. MPEG2->MPEG4); (6) Encryption; (7) Combination with external data; (8) PSI generation for output multiplexes; (9) SI generation (for in-band); (10) Program guide generation (for in-band); and (11) Auxiliary data stream.<br /><br />Authorizations<br /><br />Interactive applications: Set-top firmware updates; Stream arrival is asynchronous; Null packet generation; PCR correction; Stream output is frame synchronous.<br /><br />Stream input is frame synchronous (serial/parallel interface); QAM modulation; FEC computation and insertion; 128 6 MHz channels (10 program streams per channel); (future) 96 8 MHz channels (10-15 streams per channel, depending upon MPEG resolution); Output is frame synchronous (over RF).<br /><br />APPENDIX--Protocols Reference<br /><br />(1) MPEG (many specifications); (2) CableLabs Digital Settop Gateway protocol; (3) File Transfer Protocol (FTP); (4) Multicast File Transfer Protocol; (5) IETF MPEG-in-IP/Ethernet encapsulation (WAN); (6) SNMP V2/V3; (7) DAVIC/DVB (for Host OOB RF Module); (8) Server-to-Input/Storage/Processor/Output protocol; (9) Remote Control Protocol<br 
/><br />(10) SNMP MIB definitions; (11) DOCSIS 2.0 for OOB CMTS.<br /><br />Latency Assessment<br /><br />The present invention is a network-based Broadcast-on-Demand (BOD) service with time-shifting functionality. A network-based disk array is used to implement the solution. This note studies the latency properties of the solution and describes the user interface techniques possible for services.<br /><br />The latency and bandwidth characteristics of a network-based disk array with multiple active users (up to 1280) differ from those of a disk in a single-user PVR. In network-based arrays, latency for origination of video streams can be in the tens of seconds for a fully loaded system, and average bandwidth per user needs to be constrained to approximately 4 Mbps. In a single-user PVR, latency is in the tens of milliseconds and local bandwidth (disk-to-display) is essentially unlimited.<br /><br />The following table describes the latency budget for starting a new network-based stream at a specific point in time as a function of the number of active users (assuming a fully loaded system: 1280 streams maximum, 23 disks, and a track length of approximately 0.8 seconds of real-time audio/video data per stream at 4 Mbps constant bit rate encoding): See Table on Active Users<br /><br />User Interface Techniques<br /><br />Given the above stream-initiation latency budget, the following user interface techniques are possible. A tradeoff can be made between startup latency and the precision of the starting time for stream initiation. For example, if a user interface operation requires starting at a specific time, then the maximum stream initiation latency must be anticipated (0-20 seconds). Or, if the user interface operation allows a less precise time (fast forward 15 minutes), then near-zero latency is possible (the start time selection ranges plus or minus 0-20 seconds, depending on current queuing). 
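The storage figures quoted above imply some simple arithmetic. The sketch below works it through under the stated assumptions (4 Mbps constant bit rate, ~0.8 s of audio/video per track, 1280 streams on 23 disks); it is a back-of-envelope check only and does not reproduce the patent's active-user latency table:

```python
# Figures from the latency discussion: 4 Mbps CBR streams, tracks holding
# ~0.8 s of real-time A/V each, up to 1280 streams served by 23 disks.
STREAM_RATE_BPS = 4_000_000   # 4 Mbps constant bit rate encoding
TRACK_SECONDS = 0.8           # ~0.8 s of A/V per track per stream
MAX_STREAMS = 1280            # fully loaded system
DISKS = 23

track_bytes = STREAM_RATE_BPS * TRACK_SECONDS / 8   # bytes read per track
streams_per_disk = MAX_STREAMS / DISKS              # load per spindle

print(f"track size per stream: {track_bytes / 1e3:.0f} kB")
print(f"streams per disk when fully loaded: {streams_per_disk:.1f}")
```

So each track holds about 400 kB per stream, and a fully loaded system places roughly 56 streams on each disk, which is why stream-initiation latency grows with the number of active users.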
A basic service can be defined which enables a user to watch broadcast and stored video in either real-time mode or time-shifted mode. In real-time mode, multiple users receive duplicate copies of a common stream. Time-shifting mode (also called Personal-TV mode) is entered when the user invokes the control to rewind back to the beginning.<br /><br />Another service is Pause and Restart. The Pause does not take effect visually until a request/ack response is received from the Server; while the request is pending, a PAUSING message is displayed. This is so that the Server can halt its streaming buffer operation and not lose any image data. An alternative design is for the Server to replenish the data lost (streamed out) from the computed pause time back into the front end of the buffer. In this latter design the restart time is the Max SIDL time, because of the latency required to retrieve the data from storage.<br /><br />Another service is immediate Rewind and Forward. No fast-image display is played out. Instead a clock or seconds-rewound indicator is presented. No pre-processing and duplicate storage mechanism (see below) is implemented.<br /><br />Another service is jump-forward and jump-backward (e.g. in 15-minute increments). In this service the stream can be initiated nearly immediately (with SIDL variance in the point in time at which the stream is initiated).<br /><br />Over time the latency budget will continue to be reduced. Also, a future version of the system can perform re-encoding processing in order to derive fast-forward and fast-rewind streams. With these enhancements the user interface can approach the functionality of a local PVR and local disk storage.<br /><br />Glossary<br /><br />Analog Signal: A method of signal transmission in which information is relayed by continuously altering the waveform of the electromagnetic current. 
The characteristic quantity representing the information may at any instant assume any value within a continuous interval.<br /><br />ASI: Asynchronous Serial Interface. A high-speed interface used to carry synchronous MPEG transport streams.<br /><br />ATSC: Advanced Television Systems Committee. Establishes voluntary technical standards for advanced television systems, including digital high definition television (HDTV).<br /><br />Blackout: Blackout restrictions can block viewers in a certain geographic area, or viewers who fit other criteria defined by the broadcaster, from watching certain programs.<br /><br />Broadcast-on-Demand (BOD): a network-based service in which broadcast video channels are only broadcast on the final HFC segment if one or more subscribers request the channel.<br /><br />Chaining: Chaining is a method of transferring subscriber entitlements from an old viewing card to a new card during card changeovers.<br /><br />CM: Cable Modem. The subscriber equipment which provides two-way connectivity between the subscriber premises (Ethernet, USB, wireless, . . . ) and the internetworking backbone.<br /><br />CMTS: Cable Modem Termination System, a headend component of the cable return path technology used in the DOCSIS standard. The CMTS provides a two-way forwarding path for Ethernet data plus Quality of Service features for sharing the bandwidth between subscribers.<br /><br />Compression System: Responsible for compressing and multiplexing the video/audio/data bit streams, together with the authorization stream. The multiplexed data stream is then transmitted to the satellite, cable, or digital terrestrial headend.<br /><br />Conditional Access Service ID: The identifier for a conditional access event. 
A single conditional access event can be divided into blocks with different types of access restriction (e.g., the first 5 minutes clear and purchasable; the next 10 minutes scrambled and purchasable; the rest of the show scrambled).<br /><br />Conditional Access: The security technology used to control access to broadcast information, including video and audio, interactive services, etc. Access is restricted to authorized subscribers through the transmission of encrypted signals and the programmable regulation of their decryption by a system such as viewing cards.<br /><br />Control Word: The key used in the encryption or decryption of a data stream.<br /><br />Crypto-period: A crypto-period is a regular time interval during which a control word is valid. A crypto-period is typically only a few seconds long. Also called a Key Period.<br /><br />Data Services: Data services provided over cable frequently include Internet and e-mail access, and can include delivery of a wide range of non-video information to subscribers.<br /><br />DAVIC (Digital Audio-Visual Council): The DVB-RC standard for cable return path, expected to be widely used in markets where DVB standards apply. See DVB.<br /><br />Digital Signal: A discretely timed signal in which information is represented by a finite number of defined discrete values that its characteristic quantities may take in time.<br /><br />Digital Channel: A group of MPEG streams consisting of elemental streams for audio and video. Multiple digital channels can be carried in a single Transport Stream.<br /><br />DOCSIS (Data Over Cable Service Interface Specification): A standard for standalone cable modem communications, usually used with Internet or PCs, developed for the U.S. market.<br /><br />DVB: Digital Video Broadcasting. A European project that has defined transmission standards for digital broadcasting systems using satellite (DVB-S), cable (DVB-C) and terrestrial (DVB-T) media. 
The standards are created by the EP-DVB group and approved by the ITU. Specifies modulation, error correction, etc. An earlier set of return channel specifications (OOB and INB) was originally released under an organization called DAVIC.<br /><br />Electronic Program Guide: An on-screen guide to programs and services available to subscribers. The electronic program guide is a software application which runs inside the digital set-top box and is controlled by the use of a specially designed remote control. It allows the subscriber to view program schedule information, store favorite channels, `book` programs for later viewing, purchase current and future pay-per-view events, read messages from the subscriber management system, and adjust set-top box settings.<br /><br />Entitlement Control Message Generator: System component responsible for generating entitlement control messages and control words from conditional access information on the current programs; updating the entitlement control message and control word every crypto-period; and delivering them to the multiplexer.<br /><br />Entitlement Control Message (ECM): A packet that contains information the viewing card needs to determine the control word (or seed) that decrypts the picture.<br /><br />Entitlement Management Message Generator: The component of the conditional access headend that delivers entitlements to the multiplexers. Acting on commands from the subscriber management system, it creates entitlement management messages for broadcast to the viewing cards or for relaying to cable operators. It then forwards the entitlement management messages to the multiplexers. 
The Entitlement Management Message Generator includes the subscriber database, which is a subset of the information held in the subscriber management system database.<br /><br />Entitlement Management Message (EMM): A packet containing private conditional access information that specifies the authorization levels or the services of specific decoders. Entitlement management messages deliver viewing authorizations to the subscriber's card.<br /><br />Grooming: A processing function applied to the compressed MPEG audio/video stream in which the rate is adjusted higher or lower by decompressing and reprocessing the video. A synonym is rate shaping.<br /><br />In-band channels (INB): In-band channels or frequencies are those that contain content broadcast to subscribers. This can be audio, video, data, or other content.<br /><br />INA: Interactive Network Adapter. A headend component of the cable return path technology used in the DAVIC standard.<br /><br />IP: Internet Protocol.<br /><br />IP Telephony: The ability to provide local telephone services via the cable infrastructure.<br /><br />MAC: Media access control--all protocol procedures for communicating at layer two between adjacent entities connected to a common media.<br /><br />Macrovision: Copy protection system that allows consumers to view, but not record, programs that are distributed via digital STBs. The system adds a copy protection waveform to the video signal that is transparent on original program viewing, but causes copies made on VCRs to be degraded to the extent that they no longer have entertainment value.<br /><br />Middleware: The layer of software that supports the user interface and interactive applications in the set-top box, and isolates the application from the particular hardware of a set-top box platform.<br /><br />MPEG: Moving Pictures Experts Group. 
The name of the ISO/IEC working group that sets up the international standards for digital television source coding.<br /><br />MPEG-2: Industry standard for video and audio source coding using compression and multiplexing techniques to minimize video signal bit-rate in preparation for broadcasting. Supersedes the MPEG-1 standard. The standard is split into layers and profiles defining bit-rates and picture resolutions.<br /><br />MPTS: Multi Program Transport Stream. See SPTS.<br /><br />MSO: Multiple Service Operator. A cable operator with several headends, perhaps across many geographic regions.<br /><br />Module: Point-of-deployment modules are removable conditional access devices that would make it possible for one set-top box to be used in many cable markets. All hardware and software required for the conditional access system is included inside this removable module rather than built into the set-top box.<br /><br />MTA: Media Terminal Adapter. A function which provides voice over IP service to one or more telephony ports. The MTA may be standalone or embedded within a cable modem.<br /><br />On Screen Display (OSD): the display of graphics and text generated by applications within the set top box platform. OSD may be superimposed on video or may replace the video and occupy the whole screen. OSD may be translucent (video showing through regions with no text or graphics) or may be opaque (no video showing).<br /><br />OOB Channels: Out-of-band channels are channels that are not used for broadcasting content to subscribers. The ability to broadcast entitlement information in OOB channels ensures that subscriber cards receive this information even if the set-top box is tuned to an analog signal.<br /><br />OpenCable: A CableLabs.RTM. project aimed at obtaining a new generation of interoperable set-top boxes for the U.S. 
market, to enable a new range of interactive services to be provided to cable subscribers.<br /><br />OpenCAS: A committee that is defining conditional access standards for the U.S. market. Its work has been submitted for review and approval to SCTE/DVS (Society of Cable Television Engineers/Digital Video Subcommittee).<br /><br />Out-of-band Channels (OOB): upstream and downstream channels that carry signaling information and interactive traffic. OOB channels are concurrent with In-band channels (INB) but use separate frequencies.<br /><br />PHY--Physical layer interface, e.g. QAM or QPSK. All physical and logical elements required for transport of a serial bit stream across a common media, e.g. HFC.<br /><br />Program Clock Reference (PCR): A time stamp in the Transport Stream from which decoder timing is derived.<br /><br />PSIP: Program and System Information Protocol. ATSC term for the metadata used to describe events. Similar to DVB SI.<br /><br />QAM: Quadrature Amplitude Modulation. A method of modulating digital signals that uses combined techniques of phase modulation and amplitude modulation. It is particularly suited to cable networks.<br /><br />Rate Shaping: See grooming.<br /><br />Report Back: An automatic function that reports IPPV purchases to the EMM Generator, via a telephone or cable modem connection.<br /><br />Security Server: A computer that attaches a digital signature to each conditional access packet before that packet can be broadcast, and provides scrambling control words. The signature is used to verify the validity of the packet.<br /><br />Session-based Encryption: The ability to encrypt video-on-demand content per viewing session rather than per stream or in real-time. This enables cable operators to protect content, providing decryption rights to only a single viewer. 
The viewer can pause and resume viewing, if the video-on-demand Server supports such functionality.<br /><br />Set-top Box: The receiver unit, with an internal decoder, which sits on top of the television set and is connected to it. It receives and demultiplexes the incoming satellite signal and decrypts it when provided a control word by the viewing card.<br /><br />SI: Service Information (DVB term). Data used by the electronic program guide to display information about programs. Typically includes time of broadcast, title, etc. The ATSC equivalent is PSIP (Program and System Information Protocol).<br /><br />Simulcrypt: The co-existence of multiple conditional access systems on a single transmission service. In other words, using Simulcrypt, a cable operator can provide the same programming to two or more subscriber populations.<br /><br />SONET: Synchronous Optical NETwork. A fiber-optic transmission system for high-speed digital traffic. Employed by telephone companies and common carriers, SONET speeds range from 51 megabits to multiple gigabits per second. SONET is an intelligent system that provides advanced network management and a standard optical interface.<br /><br />SPTS: Single Program Transport Stream. See MPTS.<br /><br />Subscriber Management System: A system that handles the maintenance, billing, control and general supervision of subscribers to conditional access viewing services provided through cable and satellite broadcasting.<br /><br />Transcoding: Conversion of the compressed MPEG stream from one format to a different format, e.g. 
MPEG-2 to MPEG-4.<br /><br />Verifier: Conditional access software module embedded in a set-top box which handles the logical interface to the viewing card and passes entitlement control messages, entitlement management messages, and other conditional access and subscriber information to the card.<br /><br />Video-on-demand: A method of providing video services to viewers under their control, permitting them to choose what they want to view and when. Video-on-demand often includes the ability to pause viewing and resume, even tuning to other channels before resuming viewing of the video-on-demand event.<br /><br />Viewing Card: A credit-card sized programmable card. A conditional access security device in the subscriber's home, it receives and records entitlements from the broadcaster headend and checks these against the incoming program information in the entitlement control messages. If the subscriber is authorized to view the current program, the card provides the control word to the set-top box. 
(Also known generally as a smart card or subscriber access card.)<br /><br />It will be apparent to those skilled in the art that various modifications can be made to the method and apparatus for viewer control of digital TV program start time of the instant invention without departing from the scope or spirit of the invention, and it is intended that the present invention cover modifications and variations of the method and apparatus for viewer control of digital TV program start time provided they come within the scope of the appended claims and their equivalents.<br /><br /><center><b>* * * * *</b></center></div>
<div dir="ltr" style="text-align: left;" trbidi="on">
<table><tbody>
<tr><td align="LEFT" width="50%"><b>United States Patent</b></td><td align="RIGHT" width="50%"><b><a href="http://patft.uspto.gov/netacgi/nph-Parser?Sect1=PTO2&Sect2=HITOFF&p=1&u=%2Fnetahtml%2FPTO%2Fsearch-adv.htm&r=1&f=G&l=50&d=PALL&S1=8,191,100.PN.&OS=PN/8,191,100&RS=PN/8,191,100">8,191,100</a></b></td></tr>
<tr><td align="LEFT" width="50%"><b>Lindquist , et al.</b></td><td align="RIGHT" width="50%"><b>May 29, 2012</b></td></tr>
</tbody></table>
<hr style="text-align: -webkit-auto;" />
<div style="text-align: center;"><span style="font-size: x-large;"><b>Method and terminal for providing IPTV to multiple IMS users</b></span></div><br />
<br />
<br />
<center><b>Abstract</b></center><br />
<div style="text-align: -webkit-auto;">
A method and terminal for providing Internet Protocol Television (IPTV) and other communication services to a group of users, such as a family, using an IP Multimedia Subsystem (IMS) network. A group private user identity is associated with a group public user identity and with a plurality of individual public user identities, each of which is associated with a different user in the group. Utilizing the group private User ID and the group public User ID, a browser registers a group subscription with the IMS network. When an individual user enters an identifier such as a PIN, the individual is then registered with the IMS network, while maintaining the group registration with the IMS network and the IPTV network. Individual users can be changed without having to restart the browser.</div>
<hr style="text-align: -webkit-auto;" />
<table><tbody>
<tr><td align="LEFT" valign="TOP" width="10%">Inventors:</td><td align="LEFT" width="90%"><b>Lindquist; Jan Erik</b> (Alvsjo, <b>SE</b>)<b>, Persson; Fredrik</b> (Alvsjo, <b>SE</b>)</td></tr>
<tr><td align="LEFT" valign="TOP" width="10%">Assignee:</td><td align="LEFT" width="90%"><b>Telefonaktiebolaget L M Ericsson (Publ)</b> (Stockholm, <b>SE</b>) </td></tr>
<tr><td align="LEFT" nowrap="" valign="TOP" width="10%">Appl. No.:</td><td align="LEFT" width="90%"><b>12/236,673</b></td></tr>
<tr><td align="LEFT" valign="TOP" width="10%">Filed:</td><td align="LEFT" width="90%"><b>September 24, 2008</b></td></tr>
</tbody></table>
<hr style="text-align: -webkit-auto;" />
<br />
<center><b>Related U.S. Patent Documents</b></center><br />
<hr style="text-align: -webkit-auto;" />
<br />
<table><tbody>
<tr><td width="7%"></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td align="left"></td><td align="center"><b><u>Application Number</u></b></td><td align="center"><b><u>Filing Date</u></b></td><td align="center"><b><u>Patent Number</u></b></td><td align="center"><b><u>Issue Date</u></b></td></tr>
<tr><td align="center"></td><td align="center">61/058,793</td><td align="center">Jun. 4, 2008</td><td align="center"></td><td align="center"></td></tr>
<tr><td align="center"></td></tr>
</tbody></table>
<hr style="text-align: -webkit-auto;" />
<div style="text-align: -webkit-auto;">
</div>
<table><tbody>
<tr><td align="LEFT" valign="TOP" width="40%"><b>Current U.S. Class:</b></td><td align="RIGHT" valign="TOP" width="60%"><b>725/110</b> ; 725/105; 725/109</td></tr>
<tr><td align="LEFT" valign="TOP" width="40%"><b>Current International Class:</b></td><td align="RIGHT" valign="TOP" width="60%">H04N 7/173 (20060101)</td></tr>
<tr><td align="LEFT" valign="TOP" width="40%"><b>Field of Search:</b></td><td align="RIGHT" valign="TOP" width="60%">725/110</td></tr>
</tbody></table>
<br />
<hr style="text-align: -webkit-auto;" />
<br />
<center><b>References Cited <a href="http://patft.uspto.gov/netacgi/nph-Parser?Sect1=PTO2&Sect2=HITOFF&p=1&u=%2Fnetahtml%2Fsearch-adv.htm&r=0&f=S&l=50&d=PALL&Query=ref/8191100">[Referenced By]</a></b></center><br />
<hr style="text-align: -webkit-auto;" />
<br />
<center><b>U.S. Patent Documents</b></center><br />
<table><tbody>
<tr><td width="33%"></td><td width="33%"></td><td width="34%"></td></tr>
<tr><td align="left"><a href="http://appft1.uspto.gov/netacgi/nph-Parser?TERM1=20070121869&Sect1=PTO1&Sect2=HITOFF&d=PG01&p=1&u=%2Fnetahtml%2FPTO%2Fsrchnum.html&r=0&f=S&l=50" target="_blank">2007/0121869</a></td><td align="left">May 2007</td><td align="left">Gorti et al.</td></tr>
<tr><td align="left"><a href="http://appft1.uspto.gov/netacgi/nph-Parser?TERM1=20070199015&Sect1=PTO1&Sect2=HITOFF&d=PG01&p=1&u=%2Fnetahtml%2FPTO%2Fsrchnum.html&r=0&f=S&l=50" target="_blank">2007/0199015</a></td><td align="left">August 2007</td><td align="left">Lopez et al.</td></tr>
<tr><td align="left"><a href="http://appft1.uspto.gov/netacgi/nph-Parser?TERM1=20080022322&Sect1=PTO1&Sect2=HITOFF&d=PG01&p=1&u=%2Fnetahtml%2FPTO%2Fsrchnum.html&r=0&f=S&l=50" target="_blank">2008/0022322</a></td><td align="left">January 2008</td><td align="left">Grannan et al.</td></tr>
<tr><td align="left"></td></tr>
</tbody></table>
<br />
<center><b>Foreign Patent Documents</b></center><br />
<table><tbody>
<tr><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td align="left"></td><td align="left">WO 0049801</td><td></td><td align="left">Aug., 2000</td><td></td><td align="left">WO</td></tr>
<tr><td align="left"></td></tr>
</tbody></table>
<br />
<br />
<center><b>Other References</b></center><br />
<table><tbody>
<tr><td align="left">Telecommunications and Internet Converged Services and Protocols for Advanced Networking (TISPAN); Service Layer Requirements to Integrate NGN Services and IPTV; draft ETSI TS 181 016, ETSI Standards v. 0.0.5, Feb. 1, 2007, pp. 11-24. cited by other.</td></tr>
</tbody></table>
<br />
<i style="text-align: -webkit-auto;">Primary Examiner:</i><span style="background-color: white; text-align: -webkit-auto;"> Pendleton; Brian T. </span><br />
<i style="text-align: -webkit-auto;">Assistant Examiner:</i><span style="background-color: white; text-align: -webkit-auto;"> Idowu; Olugbenga </span><br />
<hr style="text-align: -webkit-auto;" />
<br />
<center><b><i>Parent Case Text</i></b></center><br />
<hr style="text-align: -webkit-auto;" />
<br />
<br />
<span style="background-color: white; text-align: -webkit-auto;">CROSS-REFERENCE TO RELATED APPLICATIONS </span><br />
<br />
<span style="background-color: white; text-align: -webkit-auto;">This application claims the benefit of U.S. Provisional Application No. 61/058,793 filed Jun. 4, 2008, the disclosure of which is incorporated herein in its entirety.</span><br />
<hr style="text-align: -webkit-auto;" />
<br />
<center><b><i>Claims</i></b></center><br />
<hr style="text-align: -webkit-auto;" />
<br />
<br />
<span style="background-color: white; text-align: -webkit-auto;">What is claimed is:</span><br />
<br />
<span style="background-color: white; text-align: -webkit-auto;">1. A method of providing Internet Protocol Television (IPTV) and other communication services to a group of users of an IP Multimedia Subsystem (IMS) network, said method comprising the steps of: associating in a local object code of a terminal, a group private user identity with a group public user identity and with a plurality of individual public user identities, each individual public user identity being associated with a different user in the group; registering a group subscription with the IMS network utilizing the group private user identity and the group public user identity; obtaining by the local object code, an address of an IPTV Portal from the IMS network; initiating IPTV service for the group subscription through the address of the IPTV Portal; and registering a first individual user with the IMS network only when the individual public user identity associated with the first individual user is received by the terminal, while maintaining the group registration with the IMS network, thereby providing the first user with IPTV service without restarting a browser in the terminal. </span><br />
<br />
<span style="background-color: white; text-align: -webkit-auto;">2. The method as recited in claim 1, further comprising the steps of: receiving in the terminal, an indication of a change of user from the first individual user to a second individual user; and registering the second individual user with the IMS network in response to the user change, while maintaining the group registration with the IMS network, thereby providing the second user with IPTV service without restarting the browser. </span><br />
<br />
<span style="background-color: white; text-align: -webkit-auto;">3. The method as recited in claim 2, wherein the step of receiving an indication of a change of user includes receiving a personal identification number (PIN) of the second individual user. </span><br />
<br />
<span style="background-color: white; text-align: -webkit-auto;">4. The method as recited in claim 3, wherein the terminal is a set top box. </span><br />
<br />
<span style="background-color: white; text-align: -webkit-auto;">5. The method as recited in claim 4, further comprising pre-configuring the set top box with the group public user identity, group private user identity and group password, and individual identities of the users in the group. </span><br />
<br />
<span style="background-color: white; text-align: -webkit-auto;">6. The method as recited in claim 5, wherein the pre-configuring step is performed automatically by connecting the set top box to an IPTV bootstrap server, which provides the group public user identity and the password to the set top box. </span><br />
<br />
<span style="background-color: white; text-align: -webkit-auto;">7. The method as recited in claim 3, wherein the terminal is a mobile terminal. </span><br />
<br />
<span style="background-color: white; text-align: -webkit-auto;">8. A terminal for providing Internet Protocol Television (IPTV) and other communication services to a group of users of an IP Multimedia Subsystem (IMS) network, said terminal comprising: at least one processor; a non-transitory computer-readable storage medium further comprising computer-readable instructions, when executed by the at least one processor, are configured for: associating a group private user identity with a group public user identity and with a plurality of individual public user identities, each individual public user identity being associated with a different user in the group; registering a group subscription with the IMS network utilizing the group private user identity and the group public user identity; obtaining an address of an IPTV Portal from the IMS network; initiating IPTV service for the group; and registering a first individual user with the IMS network only when the individual public user identity associated with the first individual user is received, while maintaining the group registration with the IMS network, thereby providing the first user with IPTV service without restarting a browser in the terminal. </span><br />
<br />
<span style="background-color: white; text-align: -webkit-auto;">9. The terminal as recited in claim 8, wherein the computer-readable instructions, when executed by the at least one processor, are further configured for: receiving an indication of a change of user from the first individual user to a second individual user; and registering the second individual user with the IMS network in response to the user change, while maintaining the group registration with the IMS network, thereby providing the second user with IPTV service without restarting the browser. </span><br />
<br />
<span style="background-color: white; text-align: -webkit-auto;">10. The terminal as recited in claim 9, wherein the computer-readable instructions configured for receiving an indication of a change of user are further configured for receiving a personal identification number (PIN) of the second individual user. </span><br />
<br />
<span style="background-color: white; text-align: -webkit-auto;">11. The terminal as recited in claim 10, wherein the browser includes an Application Programming Interface (API) for controlling local object code to register the group subscription with the IMS network, and to maintain the group registration while changing individual users. </span><br />
<br />
<span style="background-color: white; text-align: -webkit-auto;">12. The terminal as recited in claim 10, wherein the terminal is a set top box. </span><br />
<br />
<span style="background-color: white; text-align: -webkit-auto;">13. The terminal as recited in claim 12, wherein the computer-readable instructions are further configured for pre-configuring the set top box with the group public user identity and a password. </span><br />
<br />
<span style="background-color: white; text-align: -webkit-auto;">14. The terminal as recited in claim 13, wherein the computer-readable instructions configured for pre-configuring the set top box are further configured for automatically connecting the set top box to an IPTV bootstrap server, which provides the group public user identity and the password to the set top box. </span><br />
<br />
<span style="background-color: white; text-align: -webkit-auto;">15. The terminal as recited in claim 10, wherein the terminal is a mobile terminal.</span><br />
<hr style="text-align: -webkit-auto;" />
<br />
<center><b><i>Description</i></b></center><br />
<hr style="text-align: -webkit-auto;" />
<br />
<br />
<span style="background-color: white; text-align: -webkit-auto;">STATEMENT REGARDING FEDERALLY SPONSORED RESEARCH OR DEVELOPMENT </span><br />
<br />
<span style="background-color: white; text-align: -webkit-auto;">Not Applicable </span><br />
<br />
<span style="background-color: white; text-align: -webkit-auto;">REFERENCE TO SEQUENCE LISTING, A TABLE, OR A COMPUTER PROGRAM LISTING COMPACT DISC APPENDIX </span><br />
<br />
<span style="background-color: white; text-align: -webkit-auto;">Not Applicable </span><br />
<br />
<span style="background-color: white; text-align: -webkit-auto;">BACKGROUND </span><br />
<br />
<span style="background-color: white; text-align: -webkit-auto;">The present invention relates to communication systems. More particularly, and not by way of limitation, the present invention is directed to a method and terminal for providing Internet Protocol Television (IPTV) and other services to multiple users of an IP Multimedia Subsystem (IMS) network. </span><br />
<br />
<span style="background-color: white; text-align: -webkit-auto;">Conventionally, IPTV is delivered to the home through a broadband connection from an IPTV service provider to a Set Top Box (STB) connected to a television set. The STB includes a browser, and all IPTV features are controlled over a browser interface. Thus, the IPTV delivery process is browser-centric. A Session Initiation Protocol (SIP) User Agent in the STB adapts IPTV to IMS requirements. </span><br />
<br />
<span style="background-color: white; text-align: -webkit-auto;">A problem with adapting IPTV to IMS is that the browser currently has no concept of individual IMS users. So in the case of a group of users such as a family having a family subscription with different user accounts for the individual family members, the browser must be restarted every time a family member logs into a different user account within the family subscription. The user experience is adversely affected by having to restart the browser when simply switching users. </span><br />
<br />
<span style="background-color: white; text-align: -webkit-auto;">SUMMARY </span><br />
<br />
<span style="background-color: white; text-align: -webkit-auto;">The user experience would be improved by eliminating the requirement to restart the browser when switching users, while still providing the full functionality of the IMS network so that combined services (service blending) with services such as Presence, Messaging, and Chat services can be provided as well. The present invention provides a method and terminal for achieving these results. </span><br />
<br />
<span style="background-color: white; text-align: -webkit-auto;">The present invention provides simplified provisioning of the STB with IMS user information, quicker response time when switching users in the browser, and closer control of the user experience by the service provider with control of the PIN, aliases, and adding/removing new users in a group. A single IMS Private User Identity (IMPI) and password is shared by the whole subscription. </span><br />
<br />
<span style="background-color: white; text-align: -webkit-auto;">Thus, in one aspect, the present invention is directed to a method of providing Internet Protocol Television (IPTV) and other communication services to a group of users of an IP Multimedia Subsystem (IMS) network. The method includes the steps of associating a group private user identity with a group public user identity and with a plurality of individual public user identities, each associated with a different user in the group; registering a group subscription with the IMS network and an IPTV network utilizing the group private user identity and the group public user identity; and registering a first individual user with the IMS network only when the individual public user identity associated with the first individual user is received, while maintaining the group registration with the IMS network and the IPTV network. The method may also include receiving an indication of a change of user from the first individual user to a second individual user; and registering the second individual user with the IMS network in response to the user change, while maintaining the group registration with the IMS network and the IPTV network. </span><br />
<br />
<span style="background-color: white; text-align: -webkit-auto;">In another aspect, the present invention is directed to a terminal for providing IPTV and other communication services to a group of users of an IMS network. The terminal includes means for associating a group private user identity with a group public user identity and with a plurality of individual public user identities, each associated with a different user in the group; means for registering a group subscription with the IMS network and an IPTV portal utilizing the group private user identity and the group public user identity; and means for registering a first individual user with the IMS network only when the individual public user identity associated with the first individual user is received, while maintaining the group registration with the IMS network and the IPTV network. </span><br />
<br />
<span style="background-color: white; text-align: -webkit-auto;">BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWING </span><br />
<br />
<span style="background-color: white; text-align: -webkit-auto;">In the following section, the invention will be described with reference to exemplary embodiments illustrated in the figures, in which: </span><br />
<br />
<span style="background-color: white; text-align: -webkit-auto;">FIG. 1 illustrates the one-to-one relationship between private and public user identities in the IP Multimedia Subsystem (IMS) 3GPP specification; </span><br />
<br />
<span style="background-color: white; text-align: -webkit-auto;">FIG. 2 is a functional block diagram illustrating the relationships between the IMS subscription, IMS service profiles, and private and public user identities in an embodiment of the present invention; </span><br />
<br />
<span style="background-color: white; text-align: -webkit-auto;">FIG. 3 is a functional block diagram of a terminal, which includes a browser and local object code in an exemplary embodiment of the present invention; </span><br />
<br />
<span style="background-color: white; text-align: -webkit-auto;">FIG. 4 illustrates a bootstrapping process by which an STB is provisioned with a Public Group Subscription User ID, Private User ID, and password; </span><br />
<br />
<span style="background-color: white; text-align: -webkit-auto;">FIG. 5 is a signaling diagram illustrating the steps of a process in which the Group Subscription User ID is registered with IMS, the local object code learns the address of an IPTV Application Platform (IAP), and IPTV is initiated; and </span><br />
<br />
<span style="background-color: white; text-align: -webkit-auto;">FIG. 6 is a signaling diagram illustrating the steps of a process of changing users according to the teachings of the present invention. </span><br />
<br />
<span style="background-color: white; text-align: -webkit-auto;">DETAILED DESCRIPTION </span><br />
<br />
<span style="background-color: white; text-align: -webkit-auto;">The present invention provides a method and terminal for providing IPTV to multiple IMS users without requiring the browser to be restarted when switching between users. Full IMS functionality is also maintained. </span><br />
<br />
<span style="background-color: white; text-align: -webkit-auto;">In an exemplary embodiment, the browser is initiated on a family or group subscription account and never changes accounts. The browser connects to a portal, which keeps track of all the individual accounts under the group subscription and controls the switching between users in the group transparently to the browser. This enables the portal to control the PIN code for the group subscription (typically a simple 4 digit number). Additional benefits are explained below. </span><br />
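The portal-controlled switching described above can be sketched roughly as follows. This is an illustrative mock, not the patent's implementation; the class name, method names, and PIN table are all invented:

```javascript
// Hypothetical sketch: a portal that tracks the individual accounts under
// one group subscription and switches users by PIN, transparently to the
// browser. The group session is never torn down.
class GroupPortal {
  constructor(groupAccount, members) {
    this.groupAccount = groupAccount; // always logged into IMS and the portal
    this.members = members;           // PIN -> individual public user identity
    this.activeUser = null;
  }
  // Switch the active individual user; the group registration is untouched,
  // so the browser never has to restart.
  switchUser(pin) {
    const userId = this.members[pin];
    if (!userId) throw new Error("unknown PIN");
    this.activeUser = userId;
    return userId;
  }
}

const portal = new GroupPortal("family@operator.example", {
  "1234": "dad@operator.example",
  "5678": "kid@operator.example",
});
portal.switchUser("1234");
portal.switchUser("5678"); // change users without restarting the browser
```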
<br />
<span style="background-color: white; text-align: -webkit-auto;">In one embodiment, an Application Programming Interface (API) is introduced into the browser based, for example, on javascript properties and methods. The API controls the local object code to register different users to IMS. Thus, the group subscription account is always logged into IMS and to the portal, while individual group members are logged on to IMS only when those individual accounts are invoked. </span><br />
<br />
<span style="background-color: white; text-align: -webkit-auto;">FIG. 1 illustrates the one-to-one relationship between private and public user identities in the IP Multimedia Subsystem (IMS) 3GPP specification. Note that the subscription or default user is always associated with the browser. Once the browser is initiated and communication is established with the IPTV Network portal, the methods in Table 1 below may be performed. </span><br />
<br />
<center><b>TABLE 1</b></center><br />
<table><tbody>
<tr><td align="left"><b><u>Method</u></b></td><td align="left"><b><u>Description</u></b></td></tr>
<tr><td align="left" valign="top">logoffUser</td><td align="left">De-registration from IMS.</td></tr>
<tr><td align="left" valign="top">getRegisteredUsers</td><td align="left">The list of users that are registered with IMS is indicated. The default user used for registration and subscription to IPTV service is not listed.</td></tr>
</tbody></table><br />
<br />
<span style="background-color: white; text-align: -webkit-auto;">Table 2 below illustrates User Access Control Procedures for Properties. </span><br />
<br />
<center><b>TABLE 2</b></center><br />
<table><tbody>
<tr><td align="left"><b><u>Property</u></b></td><td align="left"><b><u>Procedures</u></b></td></tr>
<tr><td align="left" valign="top">userId</td><td align="left">When initiating a feature like broadcast playLive, if the userId property is specified, then the indicated user identifier is used for the session initiation. If it is not specified, then the default userId related to initiating the browser is used.</td></tr>
</tbody></table><br />
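The defaulting rule for the userId property can be sketched as follows (a minimal mock; the function shape and field names are invented for illustration):

```javascript
// Illustrative sketch of the Table 2 rule: a feature such as playLive uses
// the userId property when it is set, and otherwise falls back to the
// default user that initiated the browser.
function playLive(session, channel) {
  const userId = session.userId || session.defaultUserId;
  // In a real client, this userId would go into the SIP session initiation.
  return { channel, initiatedBy: userId };
}

const session = { defaultUserId: "group@operator.example", userId: null };
playLive(session, "news");                 // falls back to the default user
session.userId = "alice@operator.example";
playLive(session, "sports");               // uses the individual user
```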
<br />
<span style="background-color: white; text-align: -webkit-auto;">FIG. 2 is a functional block diagram illustrating the relationships between the IMS subscription 11, IMS service profiles 12<sub>1</sub>-12<sub>5</sub>, and private and public user identities in an embodiment of the present invention. Assuming the browser in the STB totally controls the login process for IMS, it is not necessary to have different IMS credentials for each user in the group. The same IMS credentials, i.e., password and IMS Private User Identity (IMPI) 13, are shared for the whole subscription. A plurality of implicitly registered IMS Public User Identities (IMPUs) 14<sub>1</sub>-14<sub>5</sub> are associated with the shared IMPI to reach a number of different service profiles. </span><br />
<br />
<span style="background-color: white; text-align: -webkit-auto;">Thus, it is only necessary to provision the STB with the group subscription account, its password, and the shared IMPI 13. The individual accounts are dynamically indicated over the browser javascript API. New accounts in the group can be added or removed without having to manually update the STB with new user information, which would otherwise be a tedious process requiring the user to enter long strings of characters with a basic remote control. </span><br />
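The identity model of FIG. 2 can be illustrated with a small data structure. This is a sketch under assumed names; the SIP URIs and the addUser helper are made up, and in the patent adding a user is a network-side change rather than local code:

```javascript
// Illustrative model of FIG. 2: one shared private identity (IMPI) and
// password for the whole subscription, with several public identities
// (IMPUs) implicitly registered against it.
const subscription = {
  impi: "family-private@ims.operator.example", // shared by the whole group
  password: "provisioned-at-bootstrap",
  impus: [
    "sip:family@operator.example", // group public identity
    "sip:dad@operator.example",
    "sip:mom@operator.example",
  ],
};

// Adding a user only extends the IMPU list; the STB's provisioned
// IMPI and password never change.
function addUser(sub, impu) {
  if (!sub.impus.includes(impu)) sub.impus.push(impu);
  return sub.impus.length;
}
```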
<br />
<span style="background-color: white; text-align: -webkit-auto;">In another embodiment, the browser may be implemented in a mobile terminal. The portal may be implemented in an IPTV Application Platform (IAP), which is an Application Server (AS). </span><br />
<br />
<span style="background-color: white; text-align: -webkit-auto;">The Group account (User ID 1) is the default account, which is connected to the IPTV portal. All communication with the portal takes place over this account. For individual services, the browser indicates which User ID is invoking the requested service, such as play or playLive. Private User IDs (IMPIs) for the individual users in the device are optional. IMPIs may be desirable if the device offers other local services not controlled from the browser, or if the operator desires stricter security than that provided by the PIN. All users invoked from the browser use the Group Private User ID if no individual IMPIs are defined. </span><br />
<br />
<span style="background-color: white; text-align: -webkit-auto;">A benefit of this relationship is that the password is exchanged only at bootstrap and is not exposed outside of it. Additionally, only one User ID needs to be downloaded during bootstrap, or entered during manual configuration of the Public/Private User ID. The browser can control which users are available within the same IPTV subscription. Each user's PIN then becomes the security measure preventing unauthorized use of the different user accounts, much like a guardian account. </span><br />
<br />
<span style="background-color: white; text-align: -webkit-auto;">The User ID and password may be pre-configured in the STB, either manually or automatically. If done manually, the User ID and password are entered by hand when the STB connects to the IPTV Network. If done automatically, the STB connects to an IPTV bootstrap server, which provides an xml file with the User ID and password associated with the STB. </span><br />
<br />
<span style="background-color: white; text-align: -webkit-auto;">For both the manual and automatic methods, it is necessary to register the STB in the IPTV Network. To do so, a set of registration parameters for each STB is made available to the operator. The parameters include a Public STB Identifier (for example, the MAC address) and a Private STB Identifier (a key unique to each STB and not visible to the user). The Public STB ID is used to register the equipment against the customer account or subscription; the equipment cannot be used by a customer without registration. The Private STB ID is used in the automatic configuration of the User ID and password, which are delivered during IPTV bootstrapping. </span><br />
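A rough sketch of how the two STB identifiers might be used (all names, the in-memory registry, and the returned credential shape are invented for illustration):

```javascript
// Hypothetical sketch: the Public STB ID (e.g. a MAC address) ties the box
// to a customer account, while the Private STB ID (a per-box secret key)
// gates automatic delivery of the User ID and password at bootstrap.
const registry = new Map(); // publicStbId -> { account, privateStbId }

function registerStb(publicStbId, privateStbId, account) {
  registry.set(publicStbId, { account, privateStbId });
}

// Bootstrap succeeds only if the box presents the matching private key;
// an unregistered or mismatched box gets nothing.
function bootstrap(publicStbId, privateStbId) {
  const entry = registry.get(publicStbId);
  if (!entry || entry.privateStbId !== privateStbId) return null;
  return { userId: entry.account + "-group", password: "from-xml-file" };
}

registerStb("00:11:22:33:44:55", "key-abc", "acct-1");
```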
<br />
<span style="background-color: white; text-align: -webkit-auto;">In IMS, a User Access Control function controls the logging in of users. The initialization of the browser is associated with a default user, while each feature that is invoked may be associated with a different user. The RegisteredUsers data object represents a list of users that are currently registered with IMS. Items in the data object can be accessed using array notation. Table 3 below illustrates the properties of the RegisteredUsers data object. </span><br />
<br />
<center><b>TABLE 3</b></center><br />
<table><tbody>
<tr><td align="left" valign="top">readonly String userId</td><td align="left">The user identifier; represents the public user identity.</td></tr>
</tbody></table><br />
<br />
<span style="background-color: white; text-align: -webkit-auto;">Tables 4a-4c below illustrate methods of logging users on and off IMS and getting registered users. </span><br />
<br />
<center><b>TABLE 4a</b></center><br />
<table><tbody>
<tr><td align="left" colspan="2"><b>Integer logonUser (String userId)</b></td></tr>
<tr><td align="left" valign="top">Description</td><td align="left">The indicated user shall be registered in IMS.</td></tr>
<tr><td align="left" valign="top">Attributes</td><td align="left">userId: The user identifier.</td></tr>
</tbody></table><br />
<br />
<center><b>TABLE 4b</b></center><br />
<table><tbody>
<tr><td align="left" colspan="2"><b>Boolean logoffUser (String userId)</b></td></tr>
<tr><td align="left" valign="top">Description</td><td align="left">The indicated user is de-registered from IMS. Any sessions that may be open are closed.</td></tr>
<tr><td align="left" valign="top">Attributes</td><td align="left">userId: The user identifier.</td></tr>
</tbody></table><br />
<br />
<center><b>TABLE 4c</b></center><br />
<table><tbody>
<tr><td align="left" colspan="2"><b>RegisteredUsers getRegisteredUsers ( )</b></td></tr>
<tr><td align="left" valign="top">Description</td><td align="left">The STB returns all the users that are registered with IMS through this interface.</td></tr>
<tr><td align="left" valign="top">Attributes</td><td align="left">(none)</td></tr>
</tbody></table><br />
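The three methods in Tables 4a-4c can be mocked in a few lines. This is only a sketch of the bookkeeping; in the patent these methods live in the STB's local object code and drive real IMS signaling:

```javascript
// Minimal mock of the user-access-control methods of Tables 4a-4c.
const registered = new Set();

function logonUser(userId) {
  // Table 4a: the indicated user shall be registered in IMS.
  registered.add(userId);
  return registered.size; // Integer return, as in Table 4a
}

function logoffUser(userId) {
  // Table 4b: de-register the user; open sessions would be closed here.
  return registered.delete(userId); // Boolean return, as in Table 4b
}

function getRegisteredUsers() {
  // Table 4c: list of currently registered users, each with a readonly
  // userId property (Table 3).
  return [...registered].map((userId) => ({ userId }));
}
```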
<br />
<span style="background-color: white; text-align: -webkit-auto;">FIG. 3 is a functional block diagram of a terminal 20, which includes a browser 21 and local object code 22 in an exemplary embodiment of the present invention. The browser and local object code may be implemented in a STB or an OITF compliant device. The browser provides the presentation to the user of the IPTV services. Within the browser, a number of exemplary javascript objects are shown: a Video on Demand (VoD) javascript object 23, a Broadcast javascript object 24, an LPVR javascript object 25, and an Other Services javascript object 26. The javascript objects may utilize a standardized interface, aligned with the OITF API. The javascript objects then communicate with the local object code 22, which provides all the necessary procedures and signaling with SIP, RTSP and HTTP to realize the services. </span><br />
<br />
<span style="background-color: white; text-align: -webkit-auto;">The local object code 22 is shown as being divided into a number of exemplary logical functional modules. A User Handling Function module 27 keeps track of the user(s) that are defined in the device. Once defined, any authentication that is required by any interface is retrieved from the User Handling Function module. For IMS deployments, it is necessary to perform registration with IMS prior to providing any IPTV or other IMS service. Multiple users may be registered. Note that only one user is associated with the browser, which is initiated by an IPTV Service Discovery Function module 28. </span><br />
<br />
<span style="background-color: white; text-align: -webkit-auto;">The IPTV Service Discovery Function module 28 performs the IPTV service discovery, i.e., subscription to the IPTV service in IMS. The response to the subscription provides the portal address and the URL for fetching the IPTV broadcast channel information. The broadcast channel information provides the details necessary to signal the Internet Group Management Protocol (IGMP) requests by an IPTV Media Control Function module 29. The portal address is used to start up the browser and load the initial IPTV presentation. </span><br />
<br />
<span style="background-color: white; text-align: -webkit-auto;">The IPTV Media Control Function module 29 controls unicast and multicast streams. For IMS deployments, the session initiation and teardown process is performed with SIP. For plain IPTV deployments, the session setup and teardown process is performed with RTSP. The media playback is performed with RTSP, while the broadcast selection and zapping are performed using IGMP. </span><br />
<br />
<span style="background-color: white; text-align: -webkit-auto;">An Other IMS Services Function module 31 attempts to capture the services not directly provided by IPTV, such as Presence, Messaging, and Chat services. These services have no direct media interaction except with the browser or local client software implementing the service. </span><br />
<br />
<span style="background-color: white; text-align: -webkit-auto;">An optional IPTV Bootstrap Function module 32 facilitates easier deployment of IPTV. At startup, the device connects to a preconfigured URL with a hardware identifier and an encrypted key, which is used to tie the device to a subscription. If the subscription is associated with the hardware and the key is confirmed, an XML file is downloaded with details of the user accounts and passwords. </span><br />
<br />
<span style="background-color: white; text-align: -webkit-auto;">A Local Player Function module 33 may provide local storage control for broadcast information, both multicast and DVB-T streams, provided there is a local hard disk. </span><br />
<br />
<span style="background-color: white; text-align: -webkit-auto;">A Hybrid System Function module 34 accounts for cases in which TV may also be delivered over terrestrial, satellite, or cable. In order for IPTV services to be integrated with a hybrid system, it is necessary to have access to the channel identifier so the presentation can be customized for those channels. </span><br />
<br />
<span style="background-color: white; text-align: -webkit-auto;">Prior to the browser 21 being initiated for IPTV, the local object code 22 performs the initial registration to IMS, as well as subscribing to the IPTV service using the default user. The default user includes the Public User Identity (IMPU) 14, the Private User Identity (IMPI) 13, and the password (IMS credentials). This information is either typed in manually or is automatically retrieved from the IPTV Bootstrap Server. If manually inserted, the Private User Identity is the same value as the Public User Identity. </span><br />
<br />
<span style="background-color: white; text-align: -webkit-auto;">FIG. 4 illustrates a bootstrapping process by which an STB is provisioned with a Public Group Subscription User ID, Private User ID, and password. The process is performed by the local object code 22, a wireless application protocol proxy referred to herein as a Mobile Internet Enabling Proxy (MIEP) 42, and an IPTV Bootstrap Server, which includes an IPTV Application Program (IAP) 43. At step 44, the STB is preconfigured with a public STB ID such as a MAC address, a private STB ID, and a default IPTV portal address for initial connection. The private STB ID is not visible in the STB. If no private STB ID exists for the STB, one may be given to the user so it can be inserted at startup. </span><br />
<br />
<span style="background-color: white; text-align: -webkit-auto;">At step 45, a subscriber purchases the STB. At this time, as shown in step 46, the operator configures the subscriber account with the public STB ID (for example MAC address) to be used for entitlement and the private STB ID. The private STB ID is not transmitted over the network, and as noted above, is not visible in the STB. The local object code 22 then sends an HTTPS PUT message to the MIEP 42, which sends an HTTP PUT message to the IPTV Bootstrap Server 43. In response, the server looks up the subscriber association at step 49 using the public STB ID. The server checks the Hash Message Authentication Code (HMAC) and if confirmed, prepares initial information for the STB. The initial information may include a Group Subscription User ID, authentication information for the Group Subscription User ID, and User IDs for individual users in the group. The initial information is then returned to the local object code in a 200 (HTTP PUT) message 51 and a 200 (HTTPS PUT) message 52. At step 53, the local object code sets the Public Group Subscription User ID, Private Group Subscription User ID and password, and individual User IDs. These data are used later for SIP/HTTP communication and authentication. </span><br />
<br />
<span style="background-color: white; text-align: -webkit-auto;">FIG. 5 is a signaling diagram illustrating the steps of a process in which the Group Subscription User ID is registered with IMS (steps 56-58), the local object code learns the address of an IPTV Application Program (IAP) (steps 59-63), and IPTV is initiated (steps 64-68). The process of FIG. 5 is performed after the STB is provisioned with a Public Group Subscription User ID, Private User ID, and password. This may be done in a number of different ways. For example, it may be done manually, remotely by an operator, or with a bootstrapping process such as that shown in FIG. 4. With reference to FIG. 5 and FIG. 3, the process will now be described. </span><br />
<br />
<span style="background-color: white; text-align: -webkit-auto;">The User Handling Function module 27 in the local object code 22 sends a REGISTER message 56 to the CSCF, which registers the Group Subscription User ID at 57 and returns a 200 (REGISTER) message 58 to the local object code. This registers the Group Subscription User ID with IMS. The IPTV Service Discovery Function module 28 in the local object code then sends a SUBSCRIBE message 59 to the CSCF, which forwards the SUBSCRIBE message to the IAP 43. The IAP returns a 200 (SUBSCRIBE) message 61 to the CSCF, which forwards the 200 (SUBSCRIBE) message to the local object code. </span><br />
<br />
<span style="background-color: white; text-align: -webkit-auto;">The IAP 43 then sends a NOTIFY message 62 to the CSCF 55 and includes a portal URL for the IPTV Portal (i.e., the IAP 43). The CSCF forwards the NOTIFY message to the local object code 22, which returns a 200 (NOTIFY) message 63. At this point, the local object code has learned the address of the IAP. </span><br />
<br />
<span style="background-color: white; text-align: -webkit-auto;">At 64, the IPTV Bootstrap Function module 32 in the local object code then initiates the STB browser 21. The STB browser sends an HTTPS GET message 65 to the MIEP 42, which forwards an HTTP GET message 66 to the IAP. The IAP returns a 200 (HTTP GET) message 67 to the MIEP, which forwards a 200 (HTTPS GET) message 68 to the STB browser. At this point, the Group Subscription User ID is registered with IMS, the local object code 22 has portal and channel information for the IAP 43, and IPTV is initiated. </span><br />
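The ordering of FIG. 5 can be summarized in a short sketch; the `steps` object and its method names are stand-ins for the real SIP and HTTP exchanges, not an API from the patent.

```javascript
// Illustrative summary of the startup sequence in FIG. 5. The comments
// reference the step numbers of the signaling diagram; the `steps`
// object is a hypothetical stand-in for the real signaling.
function startupSequence(steps) {
  steps.register("groupSubscriptionUserId"); // 56-58: register with IMS
  steps.subscribe("IPTV");                   // 59-61: IPTV service discovery
  const portalUrl = steps.awaitNotify();     // 62-63: NOTIFY carries portal URL
  return steps.httpGet(portalUrl);           // 64-68: browser loads the portal
}
```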
<br />
<span style="background-color: white; text-align: -webkit-auto;">FIG. 6 is a signaling diagram illustrating the steps of a process of changing users according to the teachings of the present invention. The signaling goes through the same nodes as FIG. 5, with the addition of an Access Point (AP) 69. As an initial condition, a default user is logged on. Note that the default user (subscription account) is always logged on. With reference to FIG. 6 and FIG. 2, the process will now be described. </span><br />
<br />
<span style="background-color: white; text-align: -webkit-auto;">When a new user indicates a change of user to the STB browser 21, the STB browser sends a logonUser message 71 to the local object code 22 with the User ID of the new user, for example User 2. The User Handling Function module 27 in the local object code then sends a REGISTER message 72 to the CSCF 55 requesting logon (IMS registration) of the new user. The CSCF returns a 200 (REGISTER) message 73 to the local object code. </span><br />
<br />
<span style="background-color: white; text-align: -webkit-auto;">At 74, the STB browser 21 indicates to the local object code 22 that User 2 desires to play Content on Demand (CoD). The IPTV Media Control Function module 29 in the local object code sends an INVITE message 75 to the CSCF 55. At this point in time, the STB has two active registrations: one for the Group Subscription user, and one for the current active user (User 2). Note that no SUBSCRIBE message is required to begin the session. At 76, the CSCF establishes the Linear TV/Content on Demand (LTV/CoD) session using the active user IMPU, IMPI, and password. The CSCF forwards the INVITE message 75 to the IAP 43 to establish the session. The IAP returns a 200 (INVITE) message 77 to the CSCF, which forwards the 200 (INVITE) message to the local object code. </span><br />
<br />
<span style="background-color: white; text-align: -webkit-auto;">At a later time, another change of user is indicated. The STB browser 21 sends a logoffUser message 78 to the local object code 22 indicating that User 2 has logged off. The User Handling Function module 27 in the local object code sends another REGISTER message 79 to the CSCF requesting logoff (IMS deregistration) of the old user. The CSCF returns a 200 (REGISTER) message to the local object code and the old user is deregistered. The new user is then logged on using the process shown in steps 71-73. Note that the browser is not reset. There is a seamless change of users handled by the local object code. </span><br />
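The seamless user change can be sketched as follows. The `cscf` stub is a stand-in for the real SIP exchange, and the use of SIP's Expires: 0 convention to express deregistration is an illustrative assumption; the patent only says that a REGISTER message requests logoff.

```javascript
// Sketch of the seamless user change in FIG. 6 (steps 71-73 and 78-79):
// deregister the old active user, register the new one, and leave both
// the browser and the Group Subscription registration untouched.
function changeUser(state, newUserId, cscf) {
  if (state.activeUser) {
    cscf.register(state.activeUser, { expires: 0 }); // logoff (deregistration)
  }
  cscf.register(newUserId, { expires: 3600 });       // logon (registration)
  state.activeUser = newUserId;
  // state.groupUser stays registered throughout; the browser is not reset.
  return state;
}
```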
<br />
<span style="background-color: white; text-align: -webkit-auto;">As will be recognized by those skilled in the art, the innovative concepts described in the present application can be modified and varied over a wide range of applications. Accordingly, the scope of patented subject matter should not be limited to any of the specific exemplary teachings discussed above, but is instead defined by the following claims. </span><br />
<br />
<br />
<center><b>* * * * *</b></center><br />
<hr style="text-align: -webkit-auto;" />
<br />
</div>
Kevin Andrew Woolseyhttp://www.blogger.com/profile/01268449682429697653noreply@blogger.com0tag:blogger.com,1999:blog-3776716555337472667.post-14678989109583113772012-06-03T20:37:00.004-07:002012-06-03T20:41:25.815-07:00How do I find US patents online?<div dir="ltr" style="text-align: left;" trbidi="on">
<span style="font-size: x-large;">RIGHT HERE is a good place to start reading and discovering US Patents online. Explore patents online and find out what the brightest minds and companies are thinking right now! Very exciting stuff!</span><br />
<span style="font-size: x-large;"><br /></span><br />
<br />
<br /></div>Kevin Andrew Woolseyhttp://www.blogger.com/profile/01268449682429697653noreply@blogger.com0