Is it true? Is Judge Koh really “tough on defendants”? We searched Docket Navigator and found that Judge Koh has ruled on 36 contested motions for summary judgment (not 17) and granted 10 (not 3), for an overall success rate of 28%. Limiting the query to the summary judgment motions defendants are likely to file (noninfringement, invalidity, and unenforceability) shows that Judge Koh ruled on 24 motions and granted 7. That’s a 29% success rate.
How does Judge Koh compare with other judges? Nationwide, judges have ruled on 3,519 such motions since 2008, of which 1,003 were granted, for a success rate of 29%, the same as Judge Koh’s. The truth is, when it comes to summary judgment, Judge Koh is no tougher on defendants than the national average.
Fisher’s article also takes aim at Judge Sue Robinson. “Only 3 times has a claimant ever won on SJ in front of Judge Robinson . . . out of more than 1,000 cases heard.” According to Docket Navigator data, Judge Robinson has ruled on 59 motions for summary judgment of the type a claimant might file (infringement, no invalidity or no unenforceability) and she granted 15. That’s a 25% success rate. Nationally, all judges have ruled on 1,342 such motions, of which 359 were granted, for a success rate of 27%. The truth is, claimants are about as successful on summary judgment in Judge Robinson’s courtroom as the national average.
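The percentages above are simple grant rates: motions granted divided by motions ruled on. For readers who want to check the math, here is a minimal sketch in Python using only the counts quoted in this post; the labels and helper code are ours, for illustration, not part of Docket Navigator’s platform:

```python
# Grant rates quoted above, as (motions granted, motions ruled on).
# The counts come from the Docket Navigator queries described in this post.
rulings = {
    "Koh, all contested SJ motions": (10, 36),
    "Koh, defendant-type SJ motions": (7, 24),
    "National, defendant-type SJ motions": (1003, 3519),
    "Robinson, claimant-type SJ motions": (15, 59),
    "National, claimant-type SJ motions": (359, 1342),
}

for label, (granted, ruled) in rulings.items():
    print(f"{label}: {granted}/{ruled} = {granted / ruled:.0%}")
```

Running this reproduces the rates in the text: 28% and 29% for Judge Koh, 25% for Judge Robinson, and 29% and 27% for the national baselines.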
Litigation analytics can be a powerful tool, and increasingly a necessary one. Last week, Law360 reported a warning from a panel of legal department heads to law firms “slow to adapt to the analytics-driven future: Get on board with Big Data, or get left behind.” But unreliable analytics can do more harm than no analytics. Flawed data leads to flawed analytics, which can lead to flawed decisions.
How do you know whether the data you are considering is reliable? Lawyers and judges have been asking similar questions for years in Daubert proceedings, and it all starts with some basic questions:
- How was the data collected?
- Who (or what algorithm) reviewed the data, and on what basis were codes and classifications assigned?
- Is the process transparent, or does it occur in a secret, proprietary black box?
- Is the underlying data available for independent analysis?
If the analytics you rely on could not withstand vigorous cross-examination on these questions, should they really form the basis for any decision that affects your clients or your business?
At Docket Navigator, we collect raw data from government sources. That data is then cleaned, coded, classified, and summarized by hand, in most cases by licensed U.S. attorneys. We rarely rely on automated processes, and do so only where interpretation of the data is not required and the automated processes consistently yield highly accurate results. Even then, the data is reviewed for accuracy and normalized by hand. We never rely on natural language or text recognition algorithms to interpret data. While we do not claim to be free from human error, our software engineers have developed a series of checks and safety nets to identify gaps or inconsistencies in our data. Additionally, most Docket Navigator data is first published in the Docket Report and vetted by the 11,000+ patent professionals who subscribe to Docket Navigator. The underlying data is available to Docket Navigator subscribers for independent review and analysis via our publicly available database.
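Docket Navigator has not published the details of those checks and safety nets, so the sketch below is purely hypothetical: it shows one kind of gap-and-duplicate check a hand-coded docket dataset might run. The schema, field names, and sample docket number are our assumptions, not a description of Docket Navigator’s actual system:

```python
# Hypothetical sketch of an integrity check for hand-coded docket data.
# Field names ("docket_no", "entry_no", "ruling") and the sample docket
# number are illustrative assumptions only.
from collections import Counter

def find_issues(entries):
    """Flag duplicate entries and gaps in docket entry numbering."""
    issues = []

    # Duplicate (docket, entry number) pairs suggest double coding.
    counts = Counter((e["docket_no"], e["entry_no"]) for e in entries)
    issues += [f"duplicate entry: {key}" for key, n in counts.items() if n > 1]

    # Gaps in a docket's entry numbering suggest documents missed by coders.
    by_docket = {}
    for e in entries:
        by_docket.setdefault(e["docket_no"], []).append(e["entry_no"])
    for docket, nums in by_docket.items():
        missing = set(range(min(nums), max(nums) + 1)) - set(nums)
        issues += [f"gap in {docket}: entry {n} not coded" for n in sorted(missing)]

    return issues

sample = [
    {"docket_no": "1:14-cv-00001", "entry_no": 1, "ruling": None},
    {"docket_no": "1:14-cv-00001", "entry_no": 3, "ruling": "granted"},
    {"docket_no": "1:14-cv-00001", "entry_no": 3, "ruling": "granted"},
]
for issue in find_issues(sample):
    print(issue)
```

On the sample data, this flags the double-coded entry 3 and the uncoded entry 2, the kinds of gaps and inconsistencies a review pipeline would route back to a human coder.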
Docket Navigator recently released its 2014 Year in Review, a fresh look at how the patent litigation landscape changed in 2014. The report is free and available for download here.