Integration Docs


#How do I know if I've installed the tag correctly?

You can validate that the tracking code is present and that the metadata is correctly specified by entering the page's URL into the Validator.

#Will this tag break or slow down my site?

Our JavaScript code is written so that it is the very last script that loads on your page.

We host our DNS with a global, distributed DNS system that has no downtime and extremely fast resolution times across the globe. Our JavaScript code itself is hosted on a global content delivery network (CDN) with edge locations in every global region.

Our code itself is optimized to not impact user experience in any unintended way. It also automatically leverages asynchronous JavaScript loading technologies in newer browsers. For older browsers, we use a sophisticated loading process that loads a few bytes of data from a high-speed CDN (a "bootstrap" file) and then uses an asynchronous JavaScript loading library (the LABjs library) to ensure that none of our other assets block your page load.
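The "bootstrap" approach described above can be sketched as asynchronous script injection. This is a minimal illustration of the general technique, not our actual loader; the function name and CDN host are placeholders:

```javascript
// Minimal sketch of asynchronous script injection (not the actual loader).
// The document object is passed in so the function can be exercised
// outside a browser; the src URL below would be a CDN host in practice.
function injectAsync(doc, src) {
  var script = doc.createElement("script");
  script.async = true; // never block HTML parsing
  script.src = src;
  // Insert before the first existing script tag on the page.
  var first = doc.getElementsByTagName("script")[0];
  first.parentNode.insertBefore(script, first);
  return script;
}
```

Because the injected script carries the `async` attribute, the browser fetches and executes it without pausing page rendering, which is the property that keeps a third-party tag from blocking your page load.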

Once our tracking code is loaded, we install a small cookie (just a user ID) and asynchronously beacon back information to our analytics server. If our analytics server is down, all that happens is that the actions are no longer tracked -- there is no impact on the user experience otherwise.

Our team is staffed by JavaScript experts who know the pain and frustration of poorly behaved third-party JavaScript integrations. We have therefore taken great care to make integrating our tracking code a safe, straightforward decision.

Check out our public Pingdom report showing our uptime and global response times.

#How does Parse.ly approach consumer and customer privacy?

We have an entire page dedicated to the topic of privacy. In short, we have a number of privacy controls available to customers, and we believe in analytics and privacy without compromise.

#What is a " canonical URL"?

This is the URL that Parse.ly considers the source of truth for the metadata about a particular post or page. A single piece of content may have multiple URLs associated with it, but Parse.ly will only retrieve content from the one designated as the canonical URL. This also allows us to aggregate data across all URLs that share a common canonical URL. For more details about how this works, read about how the Crawler works.

Note that the criteria Parse.ly uses to identify the canonical URL differ from the common usage of the term, in that we rarely rely on the value of the <link rel="canonical"> tag. For information on how to properly set the canonical URL, please see our metadata documentation.

#What are the involved servers?

There are three hosts involved when the JavaScript tracker is loaded on your site:

  • a CDN host, which serves our JavaScript assets from a content delivery network
  • srv-*, which serves publisher-specific configuration settings
  • srv-*, which receives event data (aka pixel data) from the JavaScript code

All of these hosts are distributed across multiple, global web servers and are also behind high-performance load balancers.

#How does Parse.ly handle staging/development sites?

It's common for websites to use a staging (also called "development") environment to test code and preview posts before they're released publicly. Often, these sites are implemented as subdomains of the site domain.

Including the JavaScript with a production Site ID on such a staging site can cause problems with the numbers that Parse.ly reports, the validator tool, the link between URLs and posts in Parse.ly's analytics engine, and more. This is because staging URLs often disappear or change without warning, and are typically not accessible to the public.

Reach out to us to receive a sandbox Site ID that you can use for testing purposes.
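One common pattern is to select the Site ID based on the page's hostname, so staging traffic never reports against the production ID. This is a sketch under assumed names: the hostname patterns and both Site ID values below are placeholders, not real identifiers.

```javascript
// Sketch: pick a Site ID based on the page hostname so staging traffic
// never pollutes production numbers. The regex and both IDs are
// placeholders; substitute your own staging pattern and Site IDs.
function siteIdFor(hostname) {
  var isStaging =
    /(^|\.)(staging|dev)\./.test(hostname) || hostname === "localhost";
  return isStaging ? "sandbox.example.com" : "example.com";
}
```

You would then pass `siteIdFor(window.location.hostname)` wherever your integration configures the Site ID.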

#How are 'infinite-scroll' pages supported?

For some sites, the model of one tracking pixel per pageload isn't a perfect fit. This can happen if your site includes multi-page posts or galleries and you'd like each page of the post to send a pageview to Parse.ly. This situation also arises for sites that use "infinite scroll": one webpage that continuously loads new posts as the user scrolls down.

Instead, you can use the JavaScript API to manually send pageload information to Parse.ly. The relevant call is PARSELY.beacon.trackPageView.

You might call this function when the user navigates to the next slide, or when they scroll down and a new article is loaded.
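A dynamic tracking call might look like the following. The URL values here are placeholders, and the exact options accepted by PARSELY.beacon.trackPageView are described in the JavaScript API documentation; treat this as a sketch of the general shape:

```javascript
// Sketch: record a dynamic pageview when a new article scrolls into view.
// The option names echo the pixel fields described elsewhere in this
// document (url, urlref); the URLs themselves are invented examples.
function onArticleLoaded(parsely, articleUrl, previousUrl) {
  parsely.beacon.trackPageView({
    url: articleUrl,     // URL of the newly loaded article
    urlref: previousUrl, // the page the reader came from
    js: 1                // mark this as a JavaScript-initiated event
  });
}
```

In a real integration you would invoke this from your infinite-scroll or slideshow navigation handler, passing the global PARSELY object.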

#How do I integrate on my mobile site?

In general, integration for mobile web is identical to integration for desktop browsers. See the basic integration instructions for details.

#What about native iOS or Android mobile apps?

We have open source iOS and Android toolkits for integrating tracking on these platforms.

#How is HTTPS supported?

Parse.ly's tracker fully supports tracking on HTTPS pages. When tracking under HTTPS, the tracking tag automatically adapts the set of hosts used to ones with valid HTTPS certificates.

#How does tracking work?

Upon a visit to a page, a code bundle is downloaded from Parse.ly's global content delivery network. This code bundle collects information about the visit to that page, such as the pageview and time spent. When combined with metadata, information about your site streams into the dashboard and APIs.

#What data is sent by default?

Several pixel fields are critical for Parse.ly to function at a basic level:

  • url: current URL
  • urlref: HTTP referrer (traffic source); can be blank
  • action: action name (defaults to pageview)
  • data: contains information on the UUID from cookie state
  • idsite: the "site identifier", aka apikey, for the publisher

Other fields are also helpful, such as:

  • title: page title
  • screen: screen resolution information
  • date: client-side datetime in the browser

These are sent in a standard HTTP request, which also includes client information such as the browser User-Agent, client IP address, and third-party cookie settings.
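Putting the fields above together, the pixel request is essentially a GET with a query string. The sketch below assembles one from invented values; the host name, path, and field values are placeholders for illustration only.

```javascript
// Sketch: build a pixel-style query string from the default fields.
// The host, "/px" path, and all field values are invented placeholders.
function buildPixelUrl(host, fields) {
  var pairs = Object.keys(fields).map(function (key) {
    return encodeURIComponent(key) + "=" + encodeURIComponent(fields[key]);
  });
  return host + "/px?" + pairs.join("&");
}

var pixel = buildPixelUrl("https://srv.example.com", {
  url: "https://example.com/post",       // current URL
  urlref: "https://news.example.com/",   // HTTP referrer
  action: "pageview",                    // action name
  idsite: "example.com"                  // site identifier
});
```

In a browser, a request like this would typically be fired by assigning the URL to a new Image's src, which is what makes the beacon fully asynchronous.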

#What version of the tracker is currently on my site?

When troubleshooting, it can be useful to discover the version number of the currently integrated tracker code.

Open the Chrome DevTools Console (or a similar tool) on your webpage. Type PARSELY.version to print the version number of the tracker, which will look something like "1.3.0".

If your site uses a legacy integration, PARSELY.version may be undefined. In that case, type or copy/paste PARSELY.config.bundle.match(/[\d\.]+/)[0] into the console instead.
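To see what that fallback expression does, here it is applied to a made-up bundle identifier. Only the regular expression comes from the instructions above; the bundle string is invented:

```javascript
// The fallback pulls the first run of digits and dots out of the bundle
// identifier. The bundle value below is invented for illustration; note
// that the regex simply grabs the first digits-and-dots run it finds.
var bundle = "p-1.3.0";
var version = bundle.match(/[\d\.]+/)[0]; // "1.3.0"
```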

#How does the service work with cookies?

A separate service generates a Universally Unique Identifier, or UUID, for the user. It stores this in a first-party cookie on your domain. It also stores another cookie on your domain for the purpose of "sessionization", or the association of several independent actions with a single visitor. These cookies are called _parsely_visitor and _parsely_session, respectively.

Our system also attempts to create a third-party cookie for the purpose of benchmarking your aggregated and anonymized visitor statistics against other publishers. This cookie is called parsely_network_uuid and is set on our own domain.

Users are able to opt out of our third-party cookie by visiting our privacy policy page. The opt-out is implemented by setting the third-party cookie's value to OPTOUT, which instructs the rest of our system to disable analytics on that user.
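A quick way to inspect these cookies in the browser is to parse document.cookie. This helper is a generic sketch; the cookie names come from the text above, but the values in any example are invented.

```javascript
// Sketch: read a named cookie out of a document.cookie-style string.
// Works on any "name=value; name=value" cookie string.
function readCookie(cookieString, name) {
  var pairs = cookieString.split("; ");
  for (var i = 0; i < pairs.length; i++) {
    var eq = pairs[i].indexOf("=");
    if (pairs[i].slice(0, eq) === name) {
      return decodeURIComponent(pairs[i].slice(eq + 1));
    }
  }
  return null; // cookie not set
}
```

For example, `readCookie(document.cookie, "_parsely_session")` returns the session cookie's value, or null if it has not been set.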

For compliance with special regulations regarding cookies (e.g. in the EU region), we can enable special settings on your account; please contact us for more details on this.

The debug fields in the JavaScript API, PARSELY.lastRequest and PARSELY.config, allow you to inspect the cookie information sent by Parse.ly's JavaScript code.

#How does unique visitor counting work?

Our system stores a site-specific user identifier for the purpose of showing aggregated "unique visitor" counts in our products. These visitor counts are stored in a format that divides them into "new" and "returning" visitors, as well as a format that combines new and returning into a generic "visitor" bucket.

In many cases, aggregate new and returning visitor counts will not add up exactly to the combined visitor count. There are two basic reasons for this.

Primarily, it is possible and common for the same user to be both new and returning within a given aggregation period. Imagine a user who visited your site for the first time ever yesterday, then came back again today. On their first visit, they were considered "new" by our system, and on the second visit they were considered "returning". Thus, when looking at new and returning visitor totals for the last two days in the dashboard, this user will be counted as both new and returning. This causes the sum of new and returning visitors to be greater than the combined visitor count stored in the database, since the combined count only counts each visitor once.

The other factor strongly affecting the summability of visitor counts is the way they're queried internally in our system. For query and storage efficiency, these sets of UUIDs are queried as approximate counts using an algorithm called HyperLogLog++. This algorithm trades a small amount of accuracy in counting unique visitors for query speed, meaning that the dashboard is able to show visitor counts alongside the rest of its realtime data. A side effect of this is a small error rate in the counts of new visitors, returning visitors, and total visitors. Thus, summing new and returning visitors is not expected to result in exactly the total visitor count, which itself contains some amount of inaccuracy. Rest assured, though, that the error rate incurred by this algorithm is small, usually hovering around 2%, and that approximate counting of unique visitors is in line with the industry standard.
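The overlap effect described above can be seen with plain sets: a visitor can land in both the "new" and "returning" buckets within one window, so the sum of the two buckets exceeds their union. (The HyperLogLog++ approximation is a separate, independent effect and is not modeled here.)

```javascript
// Sketch: why new + returning can exceed combined visitors.
// Visitor "a" is new on day 1 and returning on day 2, so over the
// two-day window they appear in both buckets but only once overall.
var newVisitors = new Set(["a", "b"]);       // first-ever visits in window
var returningVisitors = new Set(["a", "c"]); // repeat visits in window
var combined = new Set([...newVisitors, ...returningVisitors]);

var summed = newVisitors.size + returningVisitors.size; // 2 + 2 = 4
var unique = combined.size;                             // {a, b, c} = 3
```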

#Aside from the dashboard, how do I access the data sent to Parse.ly?

The dashboard includes a number of data access mechanisms, including:

  • Excel/CSV exports on any listing screen
  • Excel/CSV/HTML exports in the reporting suite

In addition to these dashboard-based data exports, you can also export data from Parse.ly via our HTTP/JSON API, which is well suited to building site or CMS integrations. See our API reference and API browser for usage examples.

If you need raw access to content/audience engagement data, including every unsampled event, you can license our Raw Data Pipeline. This gives you access to compressed JSON files representing every event sent to Parse.ly, via a secure Amazon S3 bucket (bulk access) or Amazon Kinesis Streams (real-time streaming access). Our team can also help you integrate this data with open source and cloud technologies like Python, Spark, Google BigQuery, and Amazon Redshift.
