This page has a large number of charts on the pandemic. In the box below you can select any country you are interested in – or several, if you want to compare countries.
All charts on this page will then show data for the countries that you selected.
This chart shows the number of confirmed COVID-19 cases per day.
→ We provide more detail on these points in the section ‘Cases of COVID-19: background’.
Differences in population size between countries are often large, so it is more insightful to compare the number of confirmed cases per million people.
Keep in mind that in countries that do very little testing the actual number of cases can be much higher than the number of confirmed cases shown here.
This chart shows the cumulative number of confirmed cases per million people.
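The per-million normalization described above is a simple rescaling. The sketch below shows it; the example numbers are chosen purely for illustration and are not official figures:

```javascript
// Convert an absolute case count into cases per million people.
function casesPerMillion(confirmedCases, population) {
  if (population <= 0) throw new RangeError("population must be positive");
  return (confirmedCases / population) * 1_000_000;
}

// Hypothetical example: 12,000 confirmed cases in a country of 60 million.
console.log(casesPerMillion(12000, 60_000_000)); // 200 cases per million
```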
In this document, the many linked charts, our COVID-19 Data Explorer, and the Complete COVID-19 dataset, we report and visualize the data on confirmed cases and deaths from the World Health Organization (WHO). We make the data in our charts and tables downloadable as complete and structured CSV, XLSX, and JSON files on GitHub.
The WHO has published updates on confirmed cases and deaths on its dashboard for all countries since 31 December 2019. From 31 December 2019 to 21 March 2020, this data was sourced through official communications under the International Health Regulations (IHR, 2005), complemented by publications on official ministries of health websites and social media accounts. Since 22 March 2020, the data has been compiled through WHO region-specific dashboards or direct reporting to WHO.
The WHO updates its data once per week.
In epidemiology, individuals who meet the case definition of a disease are often categorized on three different levels.
These definitions are often specific to the particular disease, but generally have some clear and overlapping criteria.
Cases of COVID-19 – as with other diseases – are broadly defined under a three-level system: suspected, probable and confirmed cases.
Typically, for a case to be confirmed, a person must have a positive result from laboratory tests. This is true regardless of whether they have shown symptoms of COVID-19 or not.
This means that the number of confirmed cases is lower than the number of probable cases, which is in turn lower than the number of suspected cases. The gap between these figures is partially explained by limited testing for the disease.
Given these three levels of case definition – suspected, probable and confirmed cases – what is actually measured and reported by governments and international organizations?
International organizations – namely the WHO and European CDC – report case figures submitted by national governments. Wherever possible they aim to report confirmed cases, for two key reasons:
1. They have a higher degree of certainty because they have laboratory confirmation;
2. They help to provide standardised comparisons between countries.
However, international bodies can only provide figures as submitted by national governments and reporting institutions. Countries can apply slightly different criteria for how cases are defined and reported. Some countries have, over the course of the outbreak, changed their reporting methodologies to also include probable cases.
One example of this is the United States. Until 14 April 2020, the US CDC provided daily reports of the number of confirmed cases only. Since then, it has provided a single figure: the sum of confirmed and probable cases.
Suspected case figures are usually not reported. The European CDC notes that suspected cases should not be reported at the European level (although countries may record this information for national records) but are used to understand who should be tested for the disease.
The number of confirmed cases reported by any institution – including the WHO, the ECDC, Johns Hopkins and others – on a given day does not represent the actual number of new cases on that date. This is because of the long reporting chain that exists between a new case and its inclusion in national or international statistics.
The steps in this chain are different across countries, but for many countries the reporting chain includes most of the following steps:
This reporting chain can take several days. This is why the figures reported on any given date do not necessarily reflect the number of new cases on that specific date.
To understand the scale of the COVID-19 outbreak, and respond appropriately, we would want to know how many people are infected by COVID-19. We would want to know the actual number of cases.
However, the actual number of COVID-19 cases is not known. When media outlets claim to report the ‘number of cases’, they are in fact reporting the number of confirmed cases, usually without saying so.
The actual number of cases is not known, not by us at Our World in Data, nor by any other research, governmental or reporting institution.
The number of confirmed cases is lower than the number of actual cases because not everyone is tested. Not all cases have a “laboratory confirmation”; testing is what makes the difference between the number of confirmed and actual cases.
All countries have been struggling to test a large number of cases, which means that not every person who should have been tested has been tested.
Since an understanding of testing for COVID-19 is crucial for interpreting the reported numbers of confirmed cases, we have looked into testing for COVID-19 in more detail.
You can find our work on testing here. In a separate post, we discuss how models of COVID-19 help us estimate the actual number of cases.
We would like to acknowledge and thank a number of people for their contributions to this work: Carl Bergstrom, Bernadeta Dadonaite, Natalie Dean, Joel Hellewell, Jason Hendry, Adam Kucharski, Moritz Kraemer and Eric Topol for their very helpful and detailed comments and suggestions on earlier versions of this work. We thank Tom Chivers for his editorial review and feedback.
And we would like to thank the many hundreds of readers who give us feedback on this work. Your feedback is what allows us to continuously clarify and improve it. We very much appreciate you taking the time to write. We cannot respond to every message we receive, but we do read all feedback and aim to take the many helpful ideas into account.
The European CDC discusses the criteria for what constitutes a probable case, and a ‘close contact’ here.
See any Situation Report by the WHO – for example Situation Report
The WHO also speaks of ‘suspected cases’ and ‘probable cases’, but the WHO Situation Reports do not provide figures on ‘probable cases’, and only report ‘suspected cases’ for Chinese provinces (‘suspected cases’ by country is not available).
In Situation Report 50 they define these as follows:
Suspect case
A. A patient with acute respiratory illness (fever and at least one sign/symptom of respiratory disease, e.g., cough or shortness of breath), AND with no other etiology that fully explains the clinical presentation, AND a history of travel to or residence in a country/area or territory reporting local transmission (see situation report) of COVID-19 disease during the 14 days prior to symptom onset.
OR
B. A patient with any acute respiratory illness AND having been in contact with a confirmed or probable COVID-19 case (see definition of contact) in the 14 days prior to onset of symptoms;
OR
C. A patient with severe acute respiratory infection (fever and at least one sign/symptom of respiratory disease, e.g., cough or shortness of breath), AND requiring hospitalization, AND with no other etiology that fully explains the clinical presentation.
Probable case
A suspect case for whom testing for COVID-19 is inconclusive, ‘inconclusive’ being the result of the test reported by the laboratory.
The US, for example, uses the following definitions: “A confirmed case or death is defined by meeting confirmatory laboratory evidence for COVID-19. A probable case or death is defined by i) meeting clinical criteria AND epidemiologic evidence with no confirmatory laboratory testing performed for COVID-19; or ii) meeting presumptive laboratory evidence AND either clinical criteria OR epidemiologic evidence; or iii) meeting vital records criteria with no confirmatory laboratory testing performed for COVID-19.”
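The three-level scheme quoted in these definitions can be summarized as a decision rule. The sketch below is an illustrative simplification of the WHO criteria above, not an operational case definition; all of the field names are invented for this example:

```javascript
// Illustrative classifier following the WHO three-level case definitions.
// Field names are hypothetical; real surveillance systems use far richer
// criteria than these boolean flags.
function classifyCase(c) {
  // A positive laboratory result makes a case "confirmed".
  if (c.labResult === "positive") return "confirmed";

  const suspect =
    // Criterion A: acute respiratory illness, unexplained, with travel
    // to or residence in an area reporting local transmission.
    (c.acuteRespiratoryIllness && c.noOtherEtiology && c.travelToAffectedArea) ||
    // Criterion B: any acute respiratory illness plus contact with a
    // confirmed or probable case.
    (c.acuteRespiratoryIllness && c.contactWithCase) ||
    // Criterion C: severe infection requiring hospitalization, unexplained.
    (c.severeRespiratoryInfection && c.hospitalized && c.noOtherEtiology);

  if (!suspect) return "not a case";
  // A suspect case with an inconclusive laboratory result is "probable".
  if (c.labResult === "inconclusive") return "probable";
  return "suspected";
}
```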
Our articles and data visualizations rely on work from many different people and organizations. When citing this topic page, please also cite the underlying data sources.
All visualizations, data, and code produced by Our World in Data are completely open access under the Creative Commons BY license. You have permission to use, distribute, and reproduce these in any medium, provided the source and authors are credited.
The data produced by third parties and made available by Our World in Data is subject to the license terms from the original third-party authors. We will always indicate the original source of the data in our documentation, so you should always check the license of any such third-party data before use and redistribution.
All of our charts can be embedded in any site.
Web Vitals is a Google initiative to provide unified guidance for web page quality signals that are essential to delivering a great user experience on the web. It aims to simplify the wide variety of available performance-measuring tools, and help site owners focus on the metrics that matter most, the Core Web Vitals.
Core Web Vitals are the subset of Web Vitals that apply to all web pages, should be measured by all site owners, and are surfaced across all Google tools. Each of the Core Web Vitals represents a distinct facet of the user experience, is measurable in the field, and reflects the real-world experience of a critical user-centric outcome.
The metrics that make up Core Web Vitals will evolve over time. The current set focuses on three aspects of the user experience: loading, interactivity, and visual stability. It includes the following metrics:
For each of these metrics, to ensure you're hitting the recommended target for most of your users, a good threshold to measure is the 75th percentile of page loads, segmented across mobile and desktop devices.
Tools that assess Core Web Vitals compliance should consider a page compliant if it meets the recommended targets at the 75th percentile for each of these three metrics.
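That rule can be expressed directly in code. The sketch below uses the published "good" thresholds for LCP (≤ 2500 ms), INP (≤ 200 ms), and CLS (≤ 0.1); the nearest-rank percentile method is an assumption, since different tools may compute the 75th percentile slightly differently:

```javascript
// "Good" thresholds from the Core Web Vitals documentation
// (LCP and INP in milliseconds, CLS unitless).
const THRESHOLDS = { LCP: 2500, INP: 200, CLS: 0.1 };

// 75th percentile using the nearest-rank method (an assumption:
// real tools may interpolate differently).
function p75(values) {
  const sorted = [...values].sort((a, b) => a - b);
  const rank = Math.ceil(0.75 * sorted.length) - 1;
  return sorted[rank];
}

// A page is compliant if the 75th percentile of each metric's
// page-load samples meets the recommended target.
function isCompliant(samplesByMetric) {
  return Object.entries(THRESHOLDS).every(
    ([metric, limit]) => p75(samplesByMetric[metric]) <= limit
  );
}
```

In practice the samples would be segmented by mobile and desktop before applying this check, as the text above recommends.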
Note: To learn more about the research and methodology behind these recommendations, see Defining the Core Web Vitals metrics thresholds.

Metrics on the Core Web Vitals track go through a lifecycle consisting of three phases: experimental, pending, and stable.
Each phase is designed to signal to developers how they should think about each metric:
The Core Web Vitals are at the following lifecycle stages:
For more information about the development of INP, see Advancing Interaction to Next Paint.
When a metric is initially developed and enters the ecosystem, it is considered an experimental metric.
The purpose of the experimental phase is to assess a metric's fitness, first by exploring the problem to be solved, and possibly iterating on what previous metrics might have failed to address. For example, INP was initially developed as an experimental metric to address the web's runtime performance issues more comprehensively than First Input Delay (FID).
The experimental phase of the Core Web Vitals lifecycle is also intended to give flexibility in a metric's development by identifying bugs and even exploring changes to its initial definition. It's also the phase in which community feedback is most important.
When the Chrome team determines that an experimental metric has received sufficient feedback and proven its efficacy, it becomes a pending metric. Pending metrics are held in this phase for a minimum of six months to give the ecosystem time to adapt. Community feedback remains an important aspect of this phase, as more developers begin to use the metric.
When a pending metric is finalized, it becomes a stable metric. This is when the metric can become a Core Web Vital.
Stable metrics are actively supported, and can be subject to bug fixes and definition changes. Stable Core Web Vitals metrics won't change more than once per year. Any change to a Core Web Vital will be clearly communicated in the metric's official documentation, as well as in the metric's changelog. Core Web Vitals are also included in any assessments.
Key point: Stable metrics aren't necessarily permanent. A stable metric can be retired and replaced by another metric that addresses the problem area more effectively.
Google believes that the Core Web Vitals are critical to all web experiences. As a result, it is committed to surfacing these metrics in all of its popular tools. The following sections detail which tools support the Core Web Vitals.
The Chrome User Experience Report collects anonymized, real user measurement data for each Core Web Vital. This data allows site owners to quickly assess their performance without requiring them to manually set up analytics for their pages, and powers tools like PageSpeed Insights and Search Console's Core Web Vitals report.
Note: For guidance on how to use these tools, and which tool is right for your use case, see Getting started with measuring Web Vitals.

The data provided by the Chrome User Experience Report offers a quick way to assess site performance, but it doesn't provide the detailed, per-pageview telemetry that's often necessary to accurately diagnose, monitor, and quickly react to regressions. As a result, we strongly recommend that sites set up their own real-user monitoring.
Each of the Core Web Vitals can be measured in JavaScript using standard web APIs.
Note: The Core Web Vitals measured in JavaScript using public APIs can differ from the Core Web Vitals reported by CrUX. For more information, see Why is CrUX data different from my RUM data?

The easiest way to measure all the Core Web Vitals is to use the web-vitals JavaScript library, a small, production-ready API wrapper that measures each metric in a way that accurately matches how the Google tools report them.
With the web-vitals library, measuring each metric is as simple as calling a single function (see the documentation for complete usage and API details):
After you configure your site to use the web-vitals library to measure and send your Core Web Vitals data to an analytics endpoint, the next step is to aggregate and report on that data to see if your pages are meeting the recommended thresholds for at least 75% of page visits.
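On the analytics side, the aggregation step groups incoming beacons by metric and reports what share of visits met the "good" threshold. The payload shape below ({name, value, id}) mirrors the Metric object emitted by the web-vitals library; the grouping code itself is a minimal sketch, not a production pipeline:

```javascript
// Recommended "good" thresholds (LCP and INP in milliseconds).
const GOOD = { LCP: 2500, INP: 200, CLS: 0.1 };

// Group web-vitals-style beacons ({name, value, id}) by metric and
// report the share of page visits that met the "good" threshold.
// A share of at least 0.75 corresponds to the recommendation above.
function passRates(beacons) {
  const rates = {};
  for (const [name, limit] of Object.entries(GOOD)) {
    const values = beacons.filter((b) => b.name === name).map((b) => b.value);
    if (values.length === 0) continue; // no data for this metric
    const good = values.filter((v) => v <= limit).length;
    rates[name] = good / values.length;
  }
  return rates;
}
```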
While some analytics providers have built-in support for Core Web Vitals metrics, even those that don't should include basic custom metric features that allow you to measure Core Web Vitals in their tool.
One example of this is the Web Vitals Report, which allows site owners to measure their Core Web Vitals using Google Analytics. For guidance on measuring Core Web Vitals using other analytics tools, see Best practices for measuring Web Vitals in the field.
You can also report on each of the Core Web Vitals without writing any code using the Web Vitals Chrome Extension. This extension uses the web-vitals library to measure each of these metrics and display them to users as they browse the web.
This extension can be helpful in understanding the performance of your own sites, your competitors' sites, and the web at large.
Developers who prefer to measure these metrics directly using the underlying web APIs can instead use these metric guides for implementation details:
For additional guidance on measuring these metrics using popular analytics services or your own in-house analytics tools, see Best practices for measuring Web Vitals in the field.
While all of the Core Web Vitals are, first and foremost, field metrics, many of them are also measurable in the lab.
Lab measurement is the best way to test the performance of features during development. It's also the best way to catch performance regressions before they happen.
The following tools can be used to measure the Core Web Vitals in a lab environment:
Tools like Lighthouse that load pages in a simulated environment without a user can't measure FID because they don't have user input. However, the Total Blocking Time (TBT) metric is lab-measurable and is an excellent proxy for FID. Performance optimizations that improve TBT in the lab should improve FID in the field. For more guidance, see Recommendations for improving your scores.
Although lab measurement is an essential part of delivering great experiences, it is not a substitute for field measurement. A site's performance can vary dramatically based on a user's device capabilities, their network conditions, what other processes may be running on the device, and how they're interacting with the page. In fact, each of the Core Web Vitals metrics can have its score affected by user interaction. Only field measurement can accurately capture the complete picture.
The following guides offer specific recommendations for how to optimize your pages for each of the Core Web Vitals:
Although the Core Web Vitals are the critical metrics for understanding and delivering a great user experience, there are also other vital metrics.
These other Web Vitals often serve as proxy or supplemental metrics for the Core Web Vitals, to help capture a larger part of the experience or aid in diagnosing a specific issue.
For example, Time to First Byte (TTFB) and First Contentful Paint (FCP) are both vital aspects of the loading experience, and both are useful in diagnosing issues with LCP (slow server response times or render-blocking resources, respectively).
Similarly, a metric like Total Blocking Time (TBT) is a vital lab metric for catching and diagnosing potential interactivity issues that can impact FID and INP. However, it isn't part of the Core Web Vitals set because it's not field-measurable, and doesn't reflect a user-centric outcome.
Web Vitals and Core Web Vitals represent the best available signals developers have today to measure quality of experience across the web, but these signals aren't perfect and future improvements or additions should be expected.
The Core Web Vitals are relevant to all web pages and featured across relevant Google tools. Because changes to these metrics have wide-reaching impact, developers should expect the definitions and thresholds of the Core Web Vitals to be stable, as well as prior notice and a predictable schedule for updates.
The other Web Vitals are often context or tool specific, and can be more experimental than the Core Web Vitals. As such, their definitions and thresholds might change with greater frequency.
For all Web Vitals, changes are documented in this public changelog.