
About Quality Metrics

We have had quality metrics since at least 1983. Committed to self-improvement, we continue to identify new metrics to gain a more thorough understanding of our work products and processes. Learn more about our quality metrics evolution in the Quality Metrics Timeline.

In response to stakeholder feedback, in fiscal year 2015 we launched the Enhanced Patent Quality Initiative (EPQI) Quality Metrics Program. In a March 2016 Federal Register Notice, we announced a new quality metrics approach, categorizing quality metrics as follows:

  • Product Indicators include metrics on the correctness and clarity of our work products.  We formulate these metrics using data from reviews conducted by the Office of Patent Quality Assurance using the Master Review Form.
  • Process Indicators assist in tracking the efficiency and consistency of internal processes.  Our current focus is on analyzing reopening of prosecution and rework of Office actions as well as improving consistency of decision making.
  • Perception Indicators use both internal and external stakeholder surveys to solicit information that can be used for root cause analysis and to validate/verify the other metrics.      

Product Indicators

Product indicators include metrics on the correctness and clarity of our work products. We formulate these metrics using data from reviews of randomly selected Office actions conducted by the Office of Patent Quality Assurance (OPQA) using the Master Review Form (MRF).

Correctness: We consider a quality patent to be one that is correctly issued in compliance with all the requirements of Title 35 as well as the relevant case law at the time of issuance. A statutorily compliant Office action includes all applicable rejections, and any asserted rejection is correct in that the decision to reject is based on sufficient evidence to support a conclusion of unpatentability. Visit the Correctness Indicator page to learn more.

Clarity: We are currently developing clarity metrics to be publicly disseminated. As part of this effort, we are working to ensure that the data captured through the Master Review Form is as reliable as possible. For example, we recently issued guidance to reviewers defining "average clarity," as used in the Master Review Form, as the level of clarity expected of Office actions from the great majority of examiners: sufficient to allow anyone reviewing the Office action to readily understand the position taken.


Process Indicators

Our process indicators assist us in tracking the efficiency and consistency of internal processes.  Our current focus is on analyzing reopening of prosecution and rework of Office actions as well as improving consistency of decision making.  To do this, we are evaluating certain types of transactions in our Quality Index Report (QIR) to identify trends and examiner behaviors indicative of either best practices or potential quality concerns.

Rather than setting targets for the particular transactions, we are conducting a root-cause analysis on the trends and behaviors to either capture identified best practices or correct issues, as appropriate.  It is sometimes desirable for an examiner to reopen prosecution or issue a second non-final rejection, such as when adjusting a rejection in view of changes to the law resulting from a new court decision.  By conducting a root cause analysis that focuses on the underlying reasons for the given trends and behaviors, we will allow for reopenings and rework where appropriate while providing training to ensure examiners have the necessary skills and resources to be as efficient as possible.

To assist with the root-cause analysis, we plotted the given QIR transactions to more easily identify examiners who may be either performing best practices or may need additional assistance.  View the Process Indicator graphical results of this analysis below.

  • Consistency of Decision Making chart shows the absolute value of the percent deviation of a primary examiner's allowance rate from that of similarly situated primary examiners in a Technology Center or Class (i.e., |(examiner rate − average rate of similarly situated examiners) / average rate of similarly situated examiners|).  The chart also shows the percentage of primary examiners falling between no deviation and a given percent deviation on the x-axis.  We use this type of information to identify areas to investigate further through a root-cause analysis.
     
  • Rework chart shows the number of examiners having a particular number of instances of rework (e.g., 2nd+ non-finals, consecutive finals, consecutive restrictions).  The chart also shows the percentage of examiners falling between no instances of rework and a given number of instances of rework.  We use this type of information to identify areas to investigate further through a root-cause analysis.  Such an analysis will assess whether the instances of rework are warranted (such as due to a change in the law resulting from a new court decision) or whether the examiner needs additional training or resources to be more efficient.  

  • Reopens chart shows the number of examiners having a particular number of re-openings (e.g., re-open after an appeal brief, re-open or allow after pre-appeal, re-open after final).  The chart also shows the percentage of examiners falling between no re-openings and a given number of re-openings.  We use this type of information to identify areas to investigate further through a root-cause analysis.  Such an analysis will assess whether the instances of reopenings are warranted (such as due to a change in the law resulting from a new court decision) or whether the examiner needs additional training or resources to be more efficient.   
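The deviation formula in the Consistency of Decision Making bullet above can be sketched in code. This is a minimal illustration with hypothetical allowance rates, not the USPTO's actual implementation; in particular, whether an examiner's own rate is excluded from the peer average is an assumption made here for illustration.

```python
from statistics import mean

def percent_deviation(examiner_rate, peer_rates):
    """Absolute percent deviation of one examiner's allowance rate from the
    average rate of similarly situated examiners:
    |(examiner_rate - peer_average) / peer_average|
    """
    peer_average = mean(peer_rates)
    return abs((examiner_rate - peer_average) / peer_average)

# Hypothetical allowance rates for primary examiners in one Technology Center
rates = {"examiner_a": 0.62, "examiner_b": 0.55, "examiner_c": 0.71, "examiner_d": 0.30}

# Deviation of each examiner from the average of the others (assumed peer group)
deviations = {
    name: percent_deviation(rate, [r for n, r in rates.items() if n != name])
    for name, rate in rates.items()
}

# Share of examiners at or below a given deviation (the chart's x-axis view)
threshold = 0.20
share = sum(d <= threshold for d in deviations.values()) / len(deviations)
print(f"{share:.0%} of examiners deviate by {threshold:.0%} or less")
```

An examiner whose deviation sits far to the right of the distribution is a candidate for the root-cause analysis described above; the computation itself sets no target.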

Perception Indicators

We have conducted both internal and external stakeholder perception surveys semi-annually since 2006.  The internal survey is sent to 750 randomly selected patent examiners, and the external survey is sent to 3,000 of our frequent-filing customers.  The results of these surveys are a vital quality indicator and are useful for validating our other quality metrics.  For example, the perception survey results help ensure that the data underlying our metrics align with our stakeholders' perceptions and that the quality metrics we report are useful to our stakeholders.

View some of the results of the External Stakeholder Perception Survey below.

  • Frequency of Technically, Legally, and Logically Sound Rejections chart shows the percentage of respondents reporting that the rejections they experienced were sound “most” or “all” of the time.
  • Percent Positive and Negative Ratings chart shows the percentage of respondents reporting that overall examination quality was “good” or “excellent” as well as the percentage of respondents reporting that overall examination quality was “poor” or “very poor”.
  • Percent Reporting “Good” or “Excellent” Quality of Prior Art chart shows the percentage of respondents reporting that the quality of the prior art cited was “good” or “excellent”, broken out by technology field.
  • Consistency chart shows the percentage of respondents reporting no consistency, a small degree of consistency, or a large degree of consistency of examination quality from one examiner to another.

Timeline

We have had quality metrics based on independent reviews of Office actions since at least 1983.  Our initial reviews focused solely on allowances. Over time, we included additional types of reviews to provide a more thorough understanding of the quality of our work products and processes.
 

FY 1983
  • Reviews of Allowances begin
  • Reviews occur prior to the patent grant

FY 2005
  • Reviews of In-Process Office Actions begin
  • Includes non-finals and finals

FY 2007
  • External Quality Surveys begin

FY 2008
  • Quality Index Reports begin

FY 2010
  • Grouping reviews into Final Disposition Reviews and In-Process Reviews begins
  • Final Disposition Reviews include allowance and final Office actions
  • In-Process Reviews include non-final Office actions

FY 2011
  • Quality Composite Score begins, composed of seven (7) individual quality metrics:
    1. In-Process Compliance Rate
    2. Final Disposition Compliance Rate
    3. Complete First Action on the Merits Review
    4. First Action on the Merits Search Review
    5. Quality Index Reporting (average of actions per disposal: RCEs as percent of total disposals; reopenings after final; second action non-finals; and restrictions after first action)
    6. External Quality Survey
    7. Internal Quality Survey

FY 2016
  • Enhanced Patent Quality Initiative (EPQI) Quality Metrics Program launches a metrics approach with three (3) categories:
    1. Product Indicators
    2. Process Indicators
    3. Perception Indicators

Contact Us

If you have questions or comments about Quality Metrics at the USPTO, please send an email to QualityMetrics2017@uspto.gov.
