Last updated: 2020-03-13 14:22:08 GMT
Data Collection Notice
I’ll be building the Community Picks section from web-access data as reported by the webserver. That’s not a 100% accurate measurement because of Cloudflare, but it’s the best I have. This data includes the request IP address (which belongs to Cloudflare’s network, and therefore can’t be used to identify who’s requesting), the request URL (kinda need that…), the headers sent (which will most likely include the browser User Agent string, which can only be used to identify roughly what web browser is being used), and a few other things that aren’t important: response code, response size, and processing timings through the proxy layer here.
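For context, a single webserver access-log entry typically looks something like the line below (this is the common Combined Log Format; the address, path, and sizes are made up for illustration):

```
1.2.3.4 - - [13/Mar/2020:14:22:08 +0000] "GET /posts/example/ HTTP/1.1" 200 5123 "-" "Mozilla/5.0 (X11; Linux x86_64) ..."
```

That’s the client (here, Cloudflare) IP, the timestamp, the request line, the response status code, the response size in bytes, the referrer, and the User Agent string.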
In other words I will not have access to identifying information when filling out that section.
The method I’ll use for filling it out is the top 10 (for now) posts, ordered by number of requests, counting only requests where the actual post content page was requested and the status code was 200 (OK, no error), updated weekly.
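As a sketch, that weekly ranking could be computed from raw access logs along these lines. The log format and the `/posts/` path prefix are assumptions for illustration, not necessarily how this site is actually laid out:

```python
import re
from collections import Counter

# Matches a GET request for a post content page in a combined-format log
# line, capturing the path and the HTTP status code. The /posts/ prefix is
# an assumed convention for "actual post content pages".
LOG_LINE = re.compile(r'"GET (?P<path>/posts/[^" ]+) HTTP/[^"]*" (?P<status>\d{3})')

def top_posts(log_lines, n=10):
    """Return the top-n post paths by request count, 200s only."""
    counts = Counter()
    for line in log_lines:
        m = LOG_LINE.search(line)
        if m and m.group("status") == "200":  # skip errors and non-post pages
            counts[m.group("path")] += 1
    return counts.most_common(n)
```

Run weekly over the week’s logs, this yields the ten `(path, count)` pairs to publish.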
Likewise, the Google’s Picks section will be built from data in the Google Search Console (which is not Google Analytics, which I do not use). That data consists of the number of impressions a page has (the number of times it’s been shown in search results) and the number of clicks a page has (the number of times someone actually clicked it). Both of these are completely aggregate statistics, and therefore also contain no identifying information.
The method for this section will be the top 10 posts ordered by clicks, with impressions used as a tie-breaker.
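The ordering above can be sketched in a few lines, assuming the Search Console data has been exported as `(page, clicks, impressions)` tuples (a hypothetical shape, just to show the tie-breaking):

```python
def google_picks(stats, n=10):
    """Top-n pages by clicks descending, ties broken by impressions descending.

    `stats` is an iterable of (page, clicks, impressions) tuples -- an
    assumed export shape, not the actual Search Console API format.
    """
    return sorted(stats, key=lambda s: (-s[1], -s[2]))[:n]
```

Negating both sort keys gives descending order on clicks first, then on impressions only when clicks are equal.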