December 18, 2007
a) Suppose 2 calculators in a lot are defective. Outline two ways of calculating the probability that the lot will be rejected. Calculate this probability.
Do I use a probability distribution here? This question involves the hypergeometric distribution.
b) The quality control department wants at least a 30% chance of rejecting lots that contain only one defective calculator. Is testing 3 calculators in a lot of 12 sufficient? If not, how would you suggest they alter their quality control techniques to achieve this standard? Support your answer with mathematical calculations.
a = 2 (defective), b = 10 (good), n = 12 (lot size), r = 3 (sample size)
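Assuming the setup those parameters imply (a lot of n = 12 with a = 2 defective and b = 10 good calculators, a sample of r = 3, and rejection if any sampled calculator turns out defective), the probability can be checked with a short hypergeometric sketch:

```python
from math import comb

def p_reject(defective, lot_size, sample_size):
    """Probability the lot is rejected, i.e. the sample contains at
    least one defective: 1 - P(no defectives in the sample)."""
    good = lot_size - defective
    return 1 - comb(good, sample_size) / comb(lot_size, sample_size)

# Part a: 2 defectives in a lot of 12, testing 3
print(p_reject(2, 12, 3))   # 1 - C(10,3)/C(12,3) = 5/11 ≈ 0.4545

# Part b: with only 1 defective, testing 3 gives 1 - C(11,3)/C(12,3) = 0.25,
# below the 30% standard; testing 4 gives 1/3 ≈ 0.333, which meets it.
print(p_reject(1, 12, 3))
print(p_reject(1, 12, 4))
```

The second way of calculating part a is the direct sum P(1 defective) + P(2 defectives) = 90/220 + 10/220 = 5/11, which agrees with the complement rule used above.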
December 11, 2007
$ Sending output to nohup.out
ctrl + d
$ There are running jobs.
ctrl + d
$ ps -ef| grep user6
user6 7026 1 0 15:52:03 ? 0:00 sleep 400 <== Running!!
user6 7049 7029 14 15:53:00 pts/tg 0:00 ps -ef
user6 6769 6768 0 14:12:12 pts/tc 0:00 -sh
user6 7050 7029 3 15:53:00 pts/tg 0:00 grep user6
user6 7029 7028 0 15:52:38 pts/tg 0:00 -sh
December 2, 2007
November 28, 2007
November 26, 2007
While there was not much time for fun, my hosts at NHN (Paul Sung and Ed Yoon) picked me up at the airport and took me out to a meal at a traditional Korean restaurant......
I had a good time. :)
November 24, 2007
The bill was to go to President Roh Moo-hyun for final approval. His office has said he may veto it because state prosecutors have already launched a probe into the scandal at the country's largest industrial group, which includes Samsung Electronics Co.
The single-chamber legislature, however, can override a veto if a majority of its 299 members attend a floor vote and two-thirds of them vote in favor.
A total of 155 lawmakers voted for the bill Friday, 17 cast ballots against it and 17 abstained. A total of 110 lawmakers were absent and did not vote.
The legislation calls for Roh to name an independent counsel to delve into allegations against Samsung, including that it operated slush funds to bribe influential figures such as prosecutors, judges and government officials.
Other accusations include claims Samsung manipulated evidence and witnesses in a court case over a purported deal that critics say was aimed at transferring corporate control of Samsung from the group's chairman, Lee Kun-hee, to his only son.
The lawmakers have cast doubt on whether state prosecutors could effectively carry out a probe given that some were among those accused of accepting bribes, saying in the bill that a probe by those investigators "cannot earn the people's confidence."
The allegations cited by the legislation are based on the claims of a former top Samsung legal affairs official, who this month went public to reveal the alleged wrongdoing.
Kim Yong-chul, himself a former prosecutor, said he was responsible for bribing those in the legal field and claimed that Lim Chai-jin — the nation's new top prosecutor — was among those who took payments. Lim has denied the allegation.
Two civic groups subsequently filed a criminal lawsuit against Samsung, prompting state prosecutors to open a probe.
On Thursday, Samsung, which has vociferously denied the allegations, expressed regret over the lawmakers' impending action, but said it would cooperate with an independent probe. On Friday, the business group said it stood by that comment.
The bill's passage — the seventh time a special prosecutor has been approved by the National Assembly — came after lawmakers reached a deal to combine two separate proposals into a single bill.
A coalition of liberal lawmakers, many aligned with Roh, agreed to a proposal by conservatives to also investigate their claims that Roh received Samsung money before and after the 2002 election.
The legislation does not cite Roh by name but states that those in "the highest political echelon" allegedly received illicit funds from Samsung during and after the 2002 presidential race.
"We have already said we can consider the veto rights and that is still effective," Cheon Ho-seon, Roh's spokesman, told reporters. But he added a final decision would be made after receiving the bill.
The legislation calls for Roh to appoint an independent counsel out of three candidates recommended by the Korean Bar Association. The special prosecutor, aided by 33 assistant investigators, can investigate for up to 105 days.
Huge South Korean industrial groups such as Samsung are not new to scandals. The conglomerates have regularly been accused of wielding influence as well as dubious dealings between subsidiaries to help controlling families evade taxes and transfer wealth to heirs.
November 17, 2007
November 3, 2007
Google's shares traded over the $700 mark this week, marking a new first for the Internet giant. Just a little more than three weeks ago, Google shares passed the $600 mark and analysts were speculating its shares could climb as high as $700 within the next year. Apparently, it's been a quick year.
The stock was up following reports that Google is in "serious discussions" with Verizon Wireless to put its mobile "GPhone" software on Verizon phones. For months, people have been speculating about the GPhone.
Most people believe that it's not a specific phone, but more likely an operating system or software that integrates many of Google's mobile services, such as Web search, Gmail, YouTube, and Google Maps, onto phones made by existing handset makers. But more than simply integrating Google services onto handsets, the new Google mobile operating system is believed to be an open platform on which application developers would have free rein to develop a slew of new applications and services.
But, as CNET News.com's Marguerite Reardon points out, Google-powered phones will be useless unless the company can strike deals with mobile operators to allow them on their networks. T-Mobile USA is rumored to be the first U.S. operator that will sign on with Google.
CNET News.com readers expressed concern that Google's mobile applications would be limited to one or two handsets offered by a single carrier.
"Great! Another new phone designed to screw over American consumers by locking it down to just one cell phone provider," one reader wrote to the News.com TalkBack forum. "Is Google really that insensitive to the market and to consumers?"
In another move that was anticipated for weeks, Google has unveiled a set of application program interfaces (APIs) that allow third-party programmers to build widgets that take advantage of personal data and profile connections on a social-networking site. But instead of limiting the project to its own social-networking property, Orkut, Google has invited other sites along for the ride--including LinkedIn, Hi5, Plaxo, Ning, and Friendster.
Google's version of this "write once, run anywhere" concept is called OpenSocial, a set of common APIs that will enable developers to create applications for social networks, blogs, and any Web sites that accept the OpenSocial code. Currently, developers have to write new programs for each site, even if the functionality will be the same on each site.
This announcement illustrates how Google is courting developers and possibly attempting to outdo Facebook in openness. Facebook opened up its platform to developers in June and the site was immediately flooded with all sorts of useful and not-so-useful apps. Google, Yahoo, and others have been heavily espousing the beauty of open platforms and making moves to that end.
Leopard on the loose
Some 30 months after Apple released Tiger, it released the Leopard operating system into the wild--a little later than originally planned due to the company's work on iPhone. And while it wasn't exactly iPhone Day, several hundred Mac fans lined up for the launch in the pouring rain outside the Apple Store on Fifth Avenue in Manhattan.
The line for Leopard appeared to be divided fairly evenly between rabid Apple fans and shoppers who'd figured they could stop by and pick it up quickly--and indeed, come launch time, the line moved fast as customers were ushered into a gauntlet of Apple Store employees (much like the iPhone launch in June) and directed straight to the cash registers when the doors opened at 6 p.m. (The scene was repeated in San Francisco, where hundreds of people lined up on Stockton Street to get their hands on the new OS.)
However, the installation process didn't always go as smoothly. Apple posted a support document over the weekend on its Web site addressing reports of interminable "blue screen" problems that caused some Mac users upgrading to Mac OS X Leopard no small degree of frustration.
Some attempts to upgrade to Leopard were stymied after the installation process was almost complete and users attempted to restart their machines. A long thread on Apple's discussion forums outlined the problems, in which their Macs would get hung up on the initial boot screen. That screen happens to be blue, inviting comparisons to the infamous Windows "blue screen of death" encountered when Windows crashes.
There are dozens of important new features in Leopard, perhaps most notably the Time Machine application that could make it easier for users to back up and restore their files. Backing up your files is generally a simple exercise with an external hard drive, but Time Machine is interesting because of the friendly way in which it lets you restore files, flying back in time (and space) to the last instance in which that file was saved.
October 27, 2007
October 13, 2007
- No plans to open source.
- The basic relational operators implemented do not allow for ad-hoc analysis or bulk processing. (Use Pig or Hadoop instead.)
- They have a SQL-like language, but it's very basic. (No support for joins, aggregation, etc.)
- It has the active participation of the Yahoo infrastructure team.
October 7, 2007
AN AMERICAN researcher has claimed he is just weeks away from realising a science-fiction dream: the creation of artificial life.
Craig Venter, a controversial and flamboyant DNA scientist, said he is about to produce a synthetic living cell that is capable of reproducing itself.
If Venter delivers on his bold promise it will rank as one of the greatest scientific breakthroughs of recent years. It could open the door to a new generation of artificial life forms designed to tackle everything from disease in humans to environmental crises.
But while the Maryland-based scientist has caused excitement in scientific quarters, he has also prompted a renewed ethical debate on the acceptable limits of research into the building blocks of life. As well as concern over "playing god", some experts fear the creation of a new species could have safety implications.
Chromosomes are at the centre of Venter's breakthrough. In the simplest forms of life, every cell has a chromosome, which is a long string of DNA that "tells" the cell what kind it is, what to do and when. He has used laboratory chemicals to create an artificial chromosome, based on a "stripped-down" version of a bacterium.
The next step involves inserting the artificial chromosome into a natural cell from a bacterium. Venter said the artificial chromosome will take over its host cell, effectively becoming a new artificial form of life. Crucially, it will have the ability to reproduce itself.
Venter believes the technique will work because his team has already successfully transplanted chromosomes from one bacterial cell to another. If the technique works as expected, the next step will be to alter the genetic make-up of the synthetic chromosome to deal with specific real-world tasks. For example, it is theoretically possible to make an artificial life form to consume greenhouse gases.
Venter, a Vietnam veteran and a yachtsman, has provoked controversy in the past because of his flamboyant style and his commercial approach to science. In the 1990s, he turned the human genome project into a competition by effectively racing publicly funded scientists to complete the map of the human gene.
He said: "This will be a very important philosophical step in the history of our species. We are going from reading our genetic code to the ability to write it. That gives us the hypothetical ability to do things never contemplated before."
Venter added he had carried out an ethical review before completing the experiment. He said: "We feel that this is good science. We are not afraid to take on things that are important just because they stimulate thinking. We are dealing in big ideas. We are trying to create a new value system for life. When dealing at this scale, you can't expect everybody to be happy."
Grahame Bulfield, vice-principal of Edinburgh University and professor of genetics, said: "This is a technical tour de force rather than an intellectual breakthrough. But it opens up molecular genetics to a huge range of new possibilities and applications, and should give much more control over how it is done."
James Milner-White, professor of structural bio-informatics at Glasgow University, said: "It's potentially very exciting. I would want to know more about what is happening in the experiments and whether the life forms they create are viable. I note that they haven't mentioned that yet. If the life forms are viable, then it could be very significant."
Dr Mark Bailey, a lecturer in genetics at Glasgow University, said:
"If this work does produce viable bacteria, the next step will be to add genes to them to get them to do what you want them to do. Adding the genes is actually quite straightforward, but getting them to do what you want in the way you want is very challenging. That will take some years of work."
But the news has provoked concern among campaigners who want restraints on the research being pioneered by genetic scientists.
Pat Mooney, director of Canadian bioethics organisation ETC group, said: "Governments, and society in general, are way behind the ball. This is a wake-up call: what does it mean to create new life forms in a test tube?"
He said Venter was creating a "chassis on which you could build almost anything. It could be a contribution to humanity such as new drugs or a huge threat to humanity such as bio-weapons."
October 6, 2007
The fixes to Java Runtime Environment (JRE) 1.3.1, 1.4.2, 5.0 and 6.0 plug holes that attackers could use to bypass security restrictions, manipulate data, disclose sensitive information or compromise an unpatched machine. Among the JRE bugs, Sun said in several security advisories, are two that allow attack code from malicious sites to make network connections on machines other than the victimized computer. One possible result, according to a paper by several Stanford University researchers that was cited by Sun: circumvented firewalls.
Other vulnerabilities in JRE and Java Web Start, a framework that lets Java-based applications launch directly from a browser, could be used by attackers to read local files, overwrite local files and hide Java-generated warnings.
Although Sun does not assign threat scores or label its advisories with terms such as "critical" or "low," Danish bug tracking vendor Secunia collectively tagged the five advisories and their 11 patches as "highly critical," its second-highest ranking.
Some of the vulnerabilities are limited to specific JRE versions, but pulling action items from the advisories is difficult since Sun does not use an easy-to-understand grid, as Microsoft does, to indicate affected software. Neither JRE nor Web Start includes an automatic update mechanism; users must manually download and apply the updated versions Sun has posted on its Web site.
Mention of Mac OS X was, as usual, absent in the security advisories. Sun does not post updated editions of JRE and other Java components for the Mac operating system. Instead, Apple Inc.'s implementation of Java requires that the company provide Java fixes as part of its own security updates. That's been a sticking point with some Mac users, who have expressed concern that Apple has not updated its Java code since February.
October 4, 2007
subject : Open Source Program and Software
6:30~7:00: dinner, reception.
Open Source Programs Manager, Google, Inc.
Zaheda Bhorat is Open Source Programs Manager at Google, Inc., working on projects to promote the spread of open source software both inside and outside Google. She has been responsible for programs like the Google Summer of Code, Google-O'Reilly Open Source Awards and is driving Google's support of open standards such as Open Document Format (ODF).
She has more than 15 years of experience in technology and software, with expertise in open source software, Web 2.0, and community building. Before joining Google, Zaheda was responsible for the open source community at OpenOffice.org while at Sun Microsystems. She built the first open source marketing community with volunteers to support the office application, and the first native language community, which now boasts 100 languages. Prior to this, Zaheda was responsible for the (online) Apple Store and for building online communities at Apple Computer Inc. while managing the Apple Online Service Division in Europe.
An internationally known advocate for open source software, Zaheda speaks regularly on open source topics and open standards, particularly in developing countries. She has an engineering degree and would like to encourage open source principles and methods to spread to areas outside of software.
September 28, 2007
2. A native-API partly Java technology-enabled driver converts JDBC calls into calls on the client API for Oracle, Sybase, Informix, DB2, or other DBMS. Note that, like the bridge driver, this style of driver requires that some binary code be loaded on each client machine.
3. A net-protocol fully Java technology-enabled driver translates JDBC API calls into a DBMS-independent net protocol which is then translated to a DBMS protocol by a server. This net server middleware is able to connect all of its Java technology-based clients to many different databases. The specific protocol used depends on the vendor. In general, this is the most flexible JDBC API alternative. It is likely that all vendors of this solution will provide products suitable for Intranet use. In order for these products to also support Internet access they must handle the additional requirements for security, access through firewalls, etc., that the Web imposes. Several vendors are adding JDBC technology-based drivers to their existing database middleware products.
4. A native-protocol fully Java technology-enabled driver converts JDBC technology calls into the network protocol used by DBMSs directly. This allows a direct call from the client machine to the DBMS server and is a practical solution for Intranet access. Since many of these protocols are proprietary the database vendors themselves will be the primary source for this style of driver. Several database vendors have these in progress.
September 26, 2007
September 21, 2007
- I'll go under the water for a while. Bye Bye~
September 19, 2007
Working Stiff       | Crime Syndicate
General Manager     | Under Boss
* Assistant Manager | * Soldier
Team Leader         | Button Man
Not only does my job ROCK, but I will! Woo!
Ps. Thanks, joo. My konglish was fixed. :)
September 18, 2007
i'll register. :)
September 17, 2007
September 14, 2007
I wanna go global!
But The Far Country makes me think it would be troublesome for me to invest the time. Also, my open source project has just begun.
Now, i'm falling in love with open source.
Mind conflict. -0-
September 13, 2007
September 12, 2007
Hbase > SELECT 'studioName:YoungGu Art'
    --> FROM movieLog_table
    --> WHERE row = 'D-War';
D-War (also known as Dragon Wars) is a 2007 South Korean film directed by Shim Hyung-rae. It is a fantasy-action film that is reportedly the biggest budgeted South Korean film of all-time. -- wikipedia.
But, ... i love google's massive computing engine.
and i wanna know their secrets.
Related News: Can Google Be Beat? They Already Have Been in South Korea.
September 11, 2007
September 10, 2007
$ ssh-keygen -t dsa
Generating public/private dsa key pair.
Enter file in which to save the key (/home/udanax/.ssh/id_dsa):
Created directory '/home/udanax/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/udanax/.ssh/id_dsa.
Your public key has been saved in /home/udanax/.ssh/id_dsa.pub.
The key fingerprint is:
blah~ blah~
$ cat ~/.ssh/id_dsa.pub | ssh id@host "cat >> .ssh/authorized_keys"
password: enter the password
September 9, 2007
P/E is the ratio of a company’s share price to its per-share earnings.
A P/E ratio of 10 means that the company has $1 of annual per-share earnings for every $10 in share price. (Earnings by definition are after all taxes, etc.)
A company’s P/E ratio is computed by dividing the current market price of one share of a company’s stock by that company’s per-share earnings. A company’s per-share earnings are simply the company’s after-tax profit divided by the number of outstanding shares. A company that earned $5M last year, with a million shares outstanding, had earnings per share of $5. If that company’s stock currently sells for $50/share, it has a P/E of 10. At this price, investors are willing to pay $10 for every $1 of last year’s earnings.
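That arithmetic is a one-liner; here is a minimal sketch, using the numbers from the example above:

```python
def pe_ratio(price_per_share, after_tax_profit, shares_outstanding):
    """Trailing P/E: share price divided by per-share earnings."""
    eps = after_tax_profit / shares_outstanding   # earnings per share
    return price_per_share / eps

# $5M earned last year, 1M shares outstanding, stock at $50/share
print(pe_ratio(50, 5_000_000, 1_000_000))   # EPS = $5, so P/E = 10.0
```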
P/Es are traditionally computed with trailing earnings (earnings from the past 12 months, called a trailing P/E) but are sometimes computed with leading earnings (earnings projected for the upcoming 12-month period, called a leading P/E).
For the most part, a high P/E means high projected earnings in the future. But actually the P/E ratio doesn’t tell a whole lot, but it’s useful to compare the P/E ratios of other companies in the same industry, or to the market in general, or against the company’s own historical P/E ratios.
Some analysts will exclude one-time gains or losses from a quarterly earnings report when computing this figure, others will include it. Adding to the confusion is the possibility of a late earnings report from a company; computation of a trailing P/E based on incomplete data is rather tricky. (It’s misleading, but that doesn’t stop the brokerage houses from reporting something.) Even worse, some methods use so-called negative earnings (i.e., losses) to compute a negative P/E, while other methods define the P/E of a loss-making company to be zero. Worst of all, it’s usually next to impossible to discover the method used to generate a particular P/E figure, chart, or report.
Like other indicators, P/E is best viewed over time, looking for a trend. A company with a steadily increasing P/E is being viewed by the investors as becoming more speculative. And of course a company’s P/E ratio changes every day as the stock price fluctuates.
The P/E ratio is commonly used as a tool for determining the value of a stock. A lot can be said about this little number, but in short, companies expected to grow and have higher earnings in the future should have a higher P/E than companies in decline.
For example, if a company has a lot of products in the pipeline, I wouldn’t mind paying a large multiple of its current earnings to buy the stock. It will have a large P/E. I am expecting it to grow quickly. A rule of thumb is that a company’s P/E ratio should be approximately equal to that company’s growth rate.
P/E is a much better comparison of the value of a stock than the price. A $10 stock with a P/E of 40 is much more “expensive” than a $100 stock with a P/E of 6. You are paying more for the $10 stock’s future earnings stream. The $10 stock is probably a small company with an exciting product and few competitors. The $100 stock is probably pretty staid - maybe a buggy whip manufacturer.
It’s difficult to say whether a particular P/E is high or low, but there are a number of factors you should consider!
First: It’s useful to look at the forward and historical earnings growth rate. (If a company has been growing at 10% per year over the past five years but has a P/E ratio of 75, then conventional wisdom would say that the shares are expensive.)
Second: It’s important to consider the P/E ratio for the industry sector. (Food products companies will probably have very different P/E ratios than high-tech ones.)
Finally: A stock could have a high trailing-year P/E ratio, but if the earnings rise, at the end of the year it will have a low P/E after the new earnings report is released.
Thus a stock with a low P/E ratio can accurately be said to be cheap only if the future-earnings P/E is low.
If the trailing P/E is low, investors may be running from the stock and driving its price down, which only makes the stock look cheap.
September 1, 2007
WILLS POINT, Texas (AP) — Entomologists are debating the origin and rarity of a sprawling spider web that blankets several trees, shrubs and the ground along a 200-yard stretch of trail in a North Texas park.
Officials at Lake Tawakoni State Park say the massive mosquito trap is a big attraction for some visitors, while others won't go anywhere near it.
"At first, it was so white it looked like fairyland," said Donna Garde, superintendent of the park about 45 miles east of Dallas. "Now it's filled with so many mosquitoes that it's turned a little brown. There are times you can literally hear the screech of millions of mosquitoes caught in those webs."
Spider experts say the web may have been constructed by social cobweb spiders, which work together, or could be the result of a mass dispersal in which the arachnids spin webs to spread out from one another.
"I've been hearing from entomologists from Ohio, Kansas, British Columbia — all over the place," said Mike Quinn, an invertebrate biologist with the Texas Parks and Wildlife Department who first posted photos online.
Herbert A. "Joe" Pase, a Texas Forest Service entomologist, said the massive web is very unusual.
"From what I'm hearing it could be a once-in-a-lifetime event," he said.
But John Jackman, a professor and extension entomologist for Texas A&M University, said he hears reports of similar webs every couple of years.
"There are a lot of folks that don't realize spiders do that," said Jackman, author of "A Field Guide to the Spiders and Scorpions of Texas."
"Until we get some samples sent to us, we really won't know what species of spider we're talking about," Jackman said.
Garde invited the entomologists out to the park to get a firsthand look at the giant web.
"Somebody needs to come out that's an expert. I would love to see some entomology intern come out and study this," she said.
Park rangers said they expect the web to last until fall, when the spiders will start dying off.
August 31, 2007
But I don't know why they visit us. -_-a
Girls' Generation, also known as SNSD, the acronym of So Nyeo Shi Dae, is a nine-member girl group formed in 2007 by SM Entertainment. The group debuted on SBS Inkigayo on August 5, 2007, performing their first single, "Into the New World". The members are (in order of official announcement) YoonA, Tiffany, YuRi, HyoYeon, SooYoung, SeoHyun, TaeYeon (the leader), Jessica, and Sunny. They are said to be a multilingual group: aside from Korean, they are said to know English, Chinese, and Japanese. Both Jessica and Tiffany were raised in America, HyoYeon traveled to Beijing in 2004, and SooYoung started out in the Japanese entertainment business in the girl group Route-O in 2004.
August 30, 2007
One of the reasons why Google is such an effective search engine is the PageRank™ algorithm, developed by Google's founders, Larry Page and Sergey Brin, when they were graduate students at Stanford University. PageRank is determined entirely by the link structure of the Web. It is recomputed about once a month and does not involve any of the actual content of Web pages or of any individual query. Then, for any particular query, Google finds the pages on the Web that match that query and lists those pages in the order of their PageRank.
Imagine surfing the Web, going from page to page by randomly choosing an outgoing link from one page to get to the next. This can lead to dead ends at pages with no outgoing links, or cycles around cliques of interconnected pages. So, a certain fraction of the time, simply choose a random page from anywhere on the Web. This theoretical random walk of the Web is a Markov chain or Markov process. The limiting probability that a dedicated random surfer visits any particular page is its PageRank. A page has high rank if it has links to and from other pages with high rank.
Let W be the set of Web pages that can be reached by following a chain of hyperlinks starting from a page at Google, and let n be the number of pages in W. The set W actually varies with time, but in May 2002, n was about 2.7 billion. Let G be the n-by-n connectivity matrix of W; that is, g_ij is 1 if there is a hyperlink to page i from page j, and 0 otherwise. The matrix G is huge, but very sparse; its number of nonzeros is the total number of hyperlinks in the pages in W.
Let c_j and r_i be the column and row sums of G:

c_j = Σ_i g_ij,   r_i = Σ_j g_ij
The quantities c_k and r_k are the out-degree and in-degree of the k-th page. Let p be the fraction of time that the random walk follows a link. Google usually takes p = 0.85. Then 1-p is the fraction of time that an arbitrary page is chosen. Let A be the n-by-n matrix whose elements are

a_ij = p g_ij / c_j + δ, where δ = (1-p) / n.

The matrix A is not sparse, but it is a rank-one modification of a sparse matrix. Most of the elements of A are equal to the small constant δ. When n = 2.7·10^9, δ = 5.5·10^-11.
The matrix A is the transition probability matrix of the Markov chain. Its elements are all strictly between zero and one, and its column sums are all equal to one. An important result in matrix theory, the Perron-Frobenius Theorem, applies to such matrices. It tells us that the largest eigenvalue of A is equal to one and that the corresponding eigenvector, which satisfies the equation

x = Ax,

exists and is unique to within a scaling factor. When this scaling factor is chosen so that

Σ_i x_i = 1
then x is the state vector of the Markov chain. The elements of x are Google's PageRank.
If the matrix were small enough to fit in MATLAB, one way to compute the eigenvector x would be to start with a good approximate solution, such as the PageRanks from the previous month, and simply repeat the assignment statement
x = Ax
until successive vectors agree to within specified tolerance. This is known as the power method and is about the only possible approach for very large n. I'm not sure how Google actually computes PageRank, but one step of the power method would require one pass over a database of Web pages, updating weighted reference counts generated by the hyperlinks between pages.
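As a sketch of that power method (the four-page link graph below is invented for illustration; Google's actual computation runs over billions of pages and its details are not public):

```python
# Power-method PageRank on a toy graph, following the construction above:
# a_ij = p*g_ij/c_j + delta with delta = (1-p)/n, iterated as x = A*x.

p = 0.85
links = {            # page j -> pages that j links to (g_ij = 1 for each i here)
    0: [1, 2],
    1: [2],
    2: [0],
    3: [0, 2],
}
n = len(links)
delta = (1 - p) / n

x = [1.0 / n] * n    # start from the uniform distribution
for _ in range(200):
    new = [delta] * n                        # the (1-p)/n random-jump term
    for j, outs in links.items():
        for i in outs:
            new[i] += p * x[j] / len(outs)   # p * g_ij / c_j
    converged = max(abs(a - b) for a, b in zip(new, x)) < 1e-10
    x = new
    if converged:
        break

print([round(v, 4) for v in x])   # the PageRank vector; it sums to 1
```

Every page in this toy graph has outgoing links, so the column sums of A are exactly one; a real Web graph also needs a rule for dangling pages with no out-links.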
August 28, 2007
The internet company Yahoo! has become embroiled in a legal battle with a human rights group over a decision to disclose the identity of Chinese citizens, leading to their arrests.
Yahoo! is being sued by the World Organisation for Human Rights, based in Washington, on behalf of Wang Xiaoning and his wife, Yu Ling.
He is serving a 10-year prison sentence for advocating democratic reform in articles circulated on the internet.
The group is also suing Yahoo! on behalf of Shi Tao, a journalist serving a 10-year sentence for sending an email summarising a Chinese government communiqué on how reporters should handle the 15th anniversary of the 1989 crackdown on the pro-democracy movement.
The suit alleges that these people - and others yet to be identified - were tortured or subjected to inhumane treatment at the hands of the Chinese authorities because of information that Yahoo!, Yahoo! China or Alibaba.com, a Chinese company in which Yahoo! has a minority stake, had passed on to the government.
Shi's case has been taken up by the British human rights group Amnesty International. The group says he is kept under tight control, with family visits requiring special approval from the prison manager, and is not allowed to receive printed matter, including books or newspapers.
Last November, he was awarded the Golden Prize of Freedom by the World Association of Newspapers.
Amnesty has also criticised Yahoo! for providing information to the authorities that led to the arrests and, more generally, the involvement of the company in the practice of government censorship.
In a 40-page defence filed in Oakland, California yesterday, the internet firm argued that US courts were not the place for political grievances against the Chinese government.
"This is a political and diplomatic issue, not a legal one," Kelley Benander, a Yahoo! spokeswoman, told the Los Angeles Times. "The real issue here is the plaintiffs' outrage at the behaviour and laws of the Chinese government. The US court system is not the forum for addressing these political concerns."
Yahoo! does not dispute turning over information in response to Chinese government demands, but argues there was little connection between that information and the arrest, prosecution and conviction of the prisoners.
In its court filings, the company said it "deeply sympathises" with the plaintiffs and their families and does not condone the suppression of their liberties. However, it also argues that the company has no control over Chinese laws or their enforcement.
Human rights groups have criticised other internet companies over their dealings with China.
Google has come under fire for its decision to censor its search services on subjects such as the 1989 Tiananmen Square massacre in order to gain greater access to China's fast-growing market.
August 27, 2007
to the American Forest Resource Council
at their Annual Conference, April 18, 2006
Visit Grist for Mitch's full article on this important issue
Ladies and Gentlemen,
As flattered as I am by Will’s kind words, I want the record to note that I attended today under assurances that this event was just some folks gathering for a round of golf. I’ve taken note of the nearest exits and am wearing running shoes.
Actually, it was with pleasure that I accepted the invitation to sit up here with Will and Russ, both of whom I think very highly of, to see if I could get a rise out of you all.
Let's acknowledge up front that there are a lot of battle scars in the room today. I have no intention of living down my past, even if you would allow me. I was among the very first tree-sitters and organized the first spotted owl protests. My organization has long been among the list of usual suspects on appeals and litigation. The fact is that most of us feel that there are things that warrant the waging of war. For folks like me, old growth, roadless areas, and a future for wildlife like lynx and, yes, owls, make the list. But with age we realize that war is not its own reward, and we look for better ways to achieve our objectives.
I know your bottom line: In providing needed wood products and jobs, your companies have to be profitable in a financial climate rocked by regulation, globalization, and other factors.
On my side, the bottom line is preserving the systems and fabric of life across our region and planet in a climate clouded by greenhouse gases, the vast appetite of surging human populations, and other factors. I believe that our federal landscape should provide a sufficient network of reserves to sustain even the most demanding wildlife species and outside of those reserves should be a model of forest practices that are sustainable for stands, soils and stream life as well as for local rural communities. I further want to see solutions to the increasingly transient ownership and instability on our region's private timber lands, both large and small.
Our respective objectives can conflict but are not necessarily exclusive of one another. One way to look at this is that if you are among the companies that no longer have an economic stake in logging old growth or wildlands, or you would tend to agree that the time is past when that type of logging is socially acceptable, then collaboration is a way to expedite trends for which you are already positioned. Furthermore, when we resolve our differences collaboratively and to mutual benefit, it improves life for the people that share our communities. They too hunger for solutions that can sustain both economic prosperity and natural heritage.
Fortunately new mill technologies, new market opportunities and advances in silviculture give us more decision space within which to find common ground. That and five bucks will get you a latte grande.
In my limited experience, I've found that successful collaboration requires more than common ground. We must also accomplish that biggest of human challenges: Getting along.
Getting along doesn’t have to mean male bonding, arranged marriages between Bellingham and Forks, or even buying a Subaru and going vegan. It does mean exhibiting the leadership to engage and sustain relationships despite water bars in the road.
Challenges that I have observed through Conservation Northwest's experience, primarily on the Gifford Pinchot, Olympic and Colville National Forests, involve building trust, yielding turf, reaching agreement on field prescriptions, enshrining new protocols into boilerplate contracts, and more. Collaboration fills the timber pipeline more like a hand-operated water pump in a traditional campsite than like breaching the dams on the Lower Snake. It requires patience.
Leadership must be exhibited on all sides: timber, agency, and conservation. There are other "sides" too, such as contractors, tribes and community representatives. But the three key legs of this stool are Forest Service district and forest leadership, people like me, and people like you. Each will find pressure from both inside his organization and from among her peers to abandon collaboration and return to the trenches. We all know people who are most comfortable being something familiar rather than doing something unfamiliar.
My friend David Syre, of Trillium Corporation, and I have been catching flack just for working together to revitalize the waterfront of downtown Bellingham. Imagine how temperatures rise when old adversaries work together to cut down trees! But if you know your objectives, have the courage to explore new ways to achieve them, and have the patience to overcome obstacles, then you have a moral and business obligation to try collaboration irrespective of what your middle managers or neighbors might say.
If you have fortitude, your collaboration can withstand the wedges driven - often by public employees - in an effort to perpetuate Balkanized positions and preserve power where it doesn't belong.
The return on investment in collaboration can be very satisfying. The Gifford Pinchot National Forest's timber pipeline is no longer blocked up in litigation. The Survey and Manage injunction had slight effect on that Forest and nothing else is even under current appeal. On the GP, conservationists are now partners in seeking funding to plan and implement the new generation of timber and stewardship projects. We are also active partners in exploring efficient ways to fulfill the purposes of the National Environmental Policy Act, which already contains enough flexibility to fully inform decisions on potentially damaging federal projects while not wrapping benign or beneficial projects in red tape.
On the GP, it took a while to prime that hand pump, but now the pipeline is starting to fill up from thinning sales that are ecologically beneficial and socially acceptable. I want to recognize and give thanks to AFRC's own Bob Dick for his hard work and thoughtful leadership in the Pinchot Partnership and elsewhere. I've enjoyed an amicable relationship with Bob for twenty years, only partially because I know he hangs out with a tough crowd of Harley riders. It's no accident that Bob's efforts are yielding fruit, or more specifically, lots of little stumps.
Conservationists want to sustain this flow from the tens of thousands of acres of plantation stands on which habitat can be improved by thinning over the next few decades.
Our experience on the Colville is even more gratifying, if only because of the greater resource and political challenges of that landscape. I'm confident that the eventual outcomes from our efforts there will jointly increase timber predictability, wilderness protection, habitat restoration, community safety, and even political harmony.
Advancing a positive and common vision has its amusing moments. Picture Russ and me sitting in the office of Representative Cathy McMorris, with me lobbying for increased logging of small diameter trees and Russ pitching Wilderness protection. In fact, I sense an opportunity here to make the front page of the Oregonian if you'll all join me in reciting the word "wilderness" three times.
Russ already pointed out some of the steps that have led to the progress we have experienced. A few additional lessons from the Colville include:
1. Walk before you run. Don’t be too ambitious in early steps.
2. Prioritize the work to real and agreed-upon urgent needs, such as community safety, rather than those upon which positions are most likely to differ.
3. I reiterate Russ’ advice to focus on interests, not positions. This conceptual tool was provided by an outside consultant. On both the Colville and GP we found that outside consultants and facilitators were pivotal at key points.
4. Relationships are everything, and they are built by listening and by solving problems together.
5. Technical tools, maps, and jargon do not advance relationships. They are peripheral, not central, to good collaboration.
6. Trust is built by the time-tested means of people honoring their word.
7. Sustainability is found in the common interests of business and conservation, not in the competition for who will be the last one standing.
It seems likely that collaboration will work better in some places than others. Perhaps the easiest experiences will be where mill capacity least exceeds available volume. Yet there are enough positive examples across the West, from Oregon's Fremont to New Mexico's Gila National Forest, to prove powerful potential. Why not shoot for the moon in testing this model? With leadership and effort, perhaps we can avoid the next Biscuit-like showdown in southwestern Oregon.
The past is behind us. Comparing scars is way more fun than comparing wounds. So in this world of problems, let's see how many we can solve through collaboration.
Thanks for your attention.
August 25, 2007
The hole is nearly a billion light-years across. It is not a black hole, which is a small sphere of densely packed matter. Rather, this one is mostly devoid of stars, gas and other normal matter, and it's also strangely empty of the mysterious "dark matter" that permeates the cosmos. Other space voids have been found before, but nothing on this scale.
Astronomers don't know why the hole is there.
"Not only has no one ever found a void this big, but we never even expected to find one this size," said researcher Lawrence Rudnick of the University of Minnesota.
Rudnick's colleague Liliya R. Williams also had not anticipated this finding.
"What we've found is not normal, based on either observational studies or on computer simulations of the large-scale evolution of the universe," said Williams, also of the University of Minnesota.
The finding will be detailed in the Astrophysical Journal.
The universe is populated with visible stars, gas and dust, but most of the matter in the universe is invisible. Scientists know something is there, because they can measure the gravitational effects of the so-called dark matter. Voids exist, but they are typically relatively small.
The gargantuan hole was found by examining observations made using the Very Large Array (VLA) radio telescope, funded by the National Science Foundation.
There is a "remarkable drop in the number of galaxies" in a region of sky in the constellation Eridanus, Rudnick said.
The region had previously been dubbed the "WMAP Cold Spot," because it stood out in a map of the Cosmic Microwave Background (CMB) radiation made by NASA's Wilkinson Microwave Anisotropy Probe (WMAP) satellite. The CMB is an imprint of radiation left from the Big Bang, the theoretical beginning of the universe.
"Although our surprising results need independent confirmation, the slightly colder temperature of the CMB in this region appears to be caused by a huge hole devoid of nearly all matter roughly 6 to 10 billion light-years from Earth," Rudnick said.
Photons of the CMB gain a small amount of energy when they pass through normal regions of space with matter, the researchers explained. But when the CMB passes through a void, the photons lose energy, making the CMB from that part of the sky appear cooler.
August 24, 2007
I heard he was moving to JBoss, a division of Red Hat.
I'm envious. -0-
Trustin Lee is a member of the Apache Software Foundation, a PMC (Project Management Committee) chair, committer, and the founder of the Apache MINA project, who is involved in various open source projects. He has been developing high-performance network applications including a massive SMS gateway, a lightweight ESB, and ApacheDS LDAP server in Java for more than 4 years. Please look around his blog or his résumé to find out more about him.
Under the accord with the Internet Society of China, an offshoot of the Information Industry Ministry, the companies are "encouraged" to register users under their real names, Reporters Without Borders said in a statement. The companies may be forced to censor content or identify bloggers, the Paris-based group said.
The agreement is detrimental to free speech because service providers would be forced to divulge bloggers' identities or be punished by the government, Reporters Without Borders said. The companies also are required to "delete illegal and bad information" from blogs, the group said.
"As they already did with website hosting services, the authorities have given themselves the means to identify those posting 'subversive' content by imposing a self-discipline pact," the group said.
The accord stopped short of banning anonymous blogging, a technique Chinese Internet users have used to criticize the government without fear of reprisal. China had 162 million users in June, second only to the U.S.
Microsoft said it wouldn't ask users to reveal their identities.
"The document makes some recommendations that Microsoft does not support," Adam Sohn, director of the company's online services group, said in a statement.
"We will not implement real-name registration for blogging in our Windows Live Spaces service."
Yahoo spokeswoman Linda Du referred questions to Alibaba.com Corp., which runs Yahoo's site in China. Porter Erisman, a spokesman for Alibaba.com, didn't immediately comment.
Other blog providers that agreed to the accord include Sohu.com Inc. and Qianlong Wang, Reporters Without Borders said.
Naver currently has a 77 percent share of all searches from within South Korea. Daum.net follows with 10.8 percent, Yahoo with just 4.4 percent and Google with a tiny 1.7 percent of Korean Web searches.
Why does Google fall short in South Korea? Wayne Lee, an analyst at Woori Investment and Securities, said "No matter how powerful Google's search engine may be, it doesn't have enough Korean-language data to trawl to satisfy South Korean customers."
Naver's founders realized that when searching in Korean, there was hardly anything to be found. So they set out to create the content and databases, so that when you would search in Korean, you would find quality content. Naver set up "Knowledge iN" in 2002, enabling Koreans to help each other in a type of real-time question-and-answer platform. On average, 44,000 questions are posted each day with about 110,000 returned answers.
The company is now the most profitable in South Korea and employs "27,000 workers, posted 299 billion won, or $325 million, in profit out of 573 billion won in sales last year. It has a market value of nearly 8 trillion won," says the New York Times article.
Google and Yahoo are making efforts to catch up in this market. Google, for example, recently announced an answers service for Russia that could also come to South Korea (see Google Launches "Question and Answers" In Russia). Google also recently has tried to jazz up its Korean home page (see Google's New 'Animated' Home Page In Korea).
The Hbase Shell aims to be to Hbase what the mysql client command-line tool is to mysqld, and what sqlplus is to Oracle.
Hbase Shell was first added to TRUNK in July, 2007.
Starting today, anyone with a computer can view a close-up of about 100 million galaxies and 200 million stars.
To access Google Sky, available today, download the new Google Earth at http://earth.google.com.
"This is an application that allows you to see the sky at very, very high resolution, as if you were just flying through the universe and seeing and visiting galaxies," said Chikai Ohazama, a Google product manager who has worked to gather data from astronomical organizations around the world.
Google has stitched together real photographs of the universe into one giant database.
"Basically you're seeing imagery that you have to have a very, very high-powered telescope to look at and we're placing that in the database," Ohazama said. "You can zoom in very, very close and see the actual spiral, a galaxy and the clusters around it."
Google already allows users to see Earth at a level of detail many spy agencies would envy. The program's satellite and street-level imagery is so advanced it has generated alarm from privacy advocates.
One of the unique features of Google Sky is that you can plug in your address and the program shows you what the sky above your home looks like.
Google Sky allows users to bookmark constellations, rotate the whole sky and zoom in to see details of black holes and stars.
It is an awe-inspiring look at the universe, not to mention a whole new way to waste time at work.
Yahoo's involvement wasn't actually news either, because Yahoo! had hired Doug Cutting, the creator of hadoop, back in January. But Doug's talk at Oscon was kind of a coming out party for Hadoop, and Yahoo! wanted to make clear just how important they think the project is. In fact, I even had a call from David Filo to make sure I knew that the support is coming from the top.
Jeremy Zawodny's post about hadoop on the Yahoo! developer network does a great job of explaining why Yahoo! considers hadoop important:
For the last several years, every company involved in building large web-scale systems has faced some of the same fundamental challenges. While nearly everyone agrees that the "divide-and-conquer using lots of cheap hardware" approach to breaking down large problems is the only way to scale, doing so is not easy.
The underlying infrastructure has always been a challenge. You have to buy, power, install, and manage a lot of servers. Even if you use somebody else's commodity hardware, you still have to develop the software that'll do the divide-and-conquer work to keep them all busy.
It's hard work. And it needs to be commoditized, just like the hardware has been...
To build the necessary software infrastructure, we could have gone off to develop our own technology, treating it as a competitive advantage, and charged ahead. But we've taken a slightly different approach. Realizing that a growing number of companies and organizations are likely to need similar capabilities, we got behind the work of Doug Cutting (creator of the open source Nutch and Lucene projects) and asked him to join Yahoo to help deploy and continue working on the [then new] open source Hadoop project.
Let me unpack the two parts of this news: hadoop as an important open source project, and Yahoo!'s involvement. On the first front, I've been arguing for some time that free and open source developers need to pay more attention to Web 2.0. Web 2.0 software-as-a-service applications built on top of the LAMP stack now generate several orders of magnitude more revenue than any companies seeking to directly monetize open source. And most of the software used by those Web 2.0 companies above the commodity platform layer is proprietary. Not only that, Web 2.0 is siphoning developers and buzz away from open source.
But there are open source projects that are tackling important Web 2.0 problems "up the stack." Brad Fitzpatrick's LiveJournal scaling tools memcached, perlbal, and mogileFS come to mind, as well as OpenID. Hadoop is another critical piece of Web 2.0 infrastructure now being duplicated in open source. (I'm sure there are others, and we'd love to hear from you about them in the comments.)
OK -- but why is Yahoo!'s involvement so important? First, it indicates a kind of competitive tipping point in Web 2.0, where a large company that is a strong #2 in a space (search) realizes that open source is a great competitive weapon against their dominant competitor. It's very much the same reason why IBM got behind Eclipse, as a way of getting competitive advantage against Sun in the Java market. (If you thought they were doing it out of the goodness of their hearts rather than clear-sighted business logic, think again.) If Yahoo! is realizing that open source is an important part of their competitive strategy, you can be sure that other big Web 2.0 companies will follow. In particular, expect support of open source projects that implement software that Google treats as proprietary. (See the long discussion thread on my post about Microsoft's submission of their shared source licenses to OSI for my arguments as to why "being on the right side of history" will ultimately drive Microsoft to open source.)
Supporting Hadoop and other Apache projects not only gets Yahoo! deeply involved in open source software projects they can use, it helps give them renewed "geek cred." And of course, attracting great people is a huge part of success in the computer industry (and for that matter, any other.)
Second, and perhaps equally important, Yahoo! gives hadoop an opportunity to be tested out at scale. Some years ago, I was on the board of Doug's open source search engine effort, Nutch. Where the project foundered was in not having a large enough data set to really prove out the algorithms. Having more than a couple of hundred million pages in the index was too expensive for a non-profit open source project to manage. One of the important truths of Web 2.0 is that it ain't the personal computer era any more, Eben Moglen's arguments to the contrary notwithstanding. A lot of really important software can't even be exercised properly without very large networks of machines, very large data sets, and heavy performance demands. Yahoo! provides all of these. This means that Hadoop will work for the big boys, and not just for toy projects. And as Jeremy pointed out in his post (linked and quoted above), today's big boy may be everyday folks a few years from now, as the size and scale of Web 2.0 applications continue to increase.
BTW, in followup conversations with Doug, he pointed out that web search is not actually the killer app for hadoop, despite the fact that it is in part an implementation of the MapReduce technique made famous by Google. After all, Yahoo! has been doing web search for years without this kind of general purpose scaling platform. "Where Hadoop really shines," says Doug, "is in data exploration." Many problems, including tuning ad systems, personalization, learning what users need -- and for that matter, corporate or government data mining -- involve finding signal in a lot of noise. Doug pointed me to an interesting article on Amazon Web Services Developer Connection: Running Hadoop MapReduce on Amazon EC2 and Amazon S3. Doug said in email:
It provides an example of using Hadoop to mine one's [logfile] data.
Another trivial application for log data that's very valuable is reconstructing and analyzing user sessions. If you've got logs for months or years from hundreds of servers and you want to look at individual user sessions (e.g., how often do users visit, how long are their sessions, how do they move around the site, how often do they re-visit the same places), this is a single MapReduce operation over all the logs: blasting through, sorting, and collating all your logs at the transfer rate of all the drives in your cluster. You don't have to re-structure your database to measure something new. It's really as easy as 'grep | sort | uniq'.
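The sessionization job Doug describes has a simple map/reduce shape. Here is a minimal in-process sketch in Python, using made-up log records and an assumed 30-minute inactivity gap; in a real Hadoop job the map and reduce functions would run distributed over the cluster rather than being chained locally like this.

```python
from collections import defaultdict

# Hypothetical log records: (user_id, timestamp_seconds, url).
logs = [
    ("alice", 100, "/home"), ("bob", 105, "/home"),
    ("alice", 130, "/search"), ("alice", 2000, "/home"),
    ("bob", 140, "/results"),
]

SESSION_GAP = 1800  # seconds of inactivity that ends a session (assumed)

def map_phase(records):
    # Like a Hadoop Mapper: emit (user, (timestamp, url)) pairs.
    for user, ts, url in records:
        yield user, (ts, url)

def reduce_phase(user, hits):
    # Like a Reducer: sort one user's hits by time, split into sessions.
    hits = sorted(hits)
    sessions, current = [], [hits[0]]
    for prev, cur in zip(hits, hits[1:]):
        if cur[0] - prev[0] > SESSION_GAP:
            sessions.append(current)
            current = []
        current.append(cur)
    sessions.append(current)
    return user, sessions

# Shuffle: group mapped pairs by key, then reduce each group.
grouped = defaultdict(list)
for user, hit in map_phase(logs):
    grouped[user].append(hit)
results = dict(reduce_phase(u, hs) for u, hs in grouped.items())

print({u: len(s) for u, s in results.items()})  # sessions per user
```

Because the shuffle delivers each user's hits to a single reducer, the whole analysis is one pass over the logs, which is the point Doug is making.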
Also, here are the slides from my talk.