Opened 6 years ago

Closed 3 years ago

Last modified 3 years ago

#386 closed Task (invalid)

Dupe php, mysql, massive data set

Reported by: matt Owned by:
Priority: major Milestone: Future releases
Component: Core Keywords: performance fast scalability high traffic
Cc: mauser Sensitive: no

Description (last modified by matt)

SEE NEW TICKET AT: #1999

This ticket has been closed as a duplicate.


This ticket is a placeholder for all performance-related notes, thoughts, and feedback.

There are already a number of interesting performance-improvement tickets:

  • Piwik creates one cookie per install, which is not scalable; we should build a server-side, DB-based cookie store: #409
  • Piwik logs should be rotated into a yearly table: #5
  • Bulk-load Piwik logs (with a documented API), which would improve tracking performance and make performance testing easy: #134
  • Fix the memory leak error during the Piwik archiving task for larger Piwik setups: #766
  • The All Websites dashboard should work when Piwik has thousands of registered websites (currently it only scales to a few hundred websites): #1077

Building a regression and performance testing environment for Piwik

Partly described in #134.

The objective of this project is to build an automated and reusable performance-testing infrastructure that makes it easy to generate lots of hits and big data sets.

  • Set up a performance-testing server (called "preprod") with monitoring
  • Replay logs from previous days (see #134); a minimal replay sketch follows this list
  • Use preprod to get a precise idea of the load Piwik can handle (pages per day, per month) and the size of the data in MySQL
  • Use preprod to run archiving and determine where archiving is slow or fails (via profiling)
  • Use preprod to optimize the tracking mechanism (denormalize tables, review the index strategy)
  • Implement quick wins and plan for the bigger changes (estimated effort: 1 day)
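
As a starting point for the log-replay item above, here is a minimal sketch (an assumption, not existing tooling) that re-sends tracker requests from a saved Apache access log to a preprod piwik.php; the preprod URL and log path are placeholders:

    <?php
    // Replay tracker hits from an access log against a preprod install.
    // The URL and log path are placeholders, not real infrastructure.
    $preprod = 'http://preprod.example.org/piwik/piwik.php';
    $log = fopen('/var/log/apache2/access.log', 'r');
    while (($line = fgets($log)) !== false) {
        // Keep only requests that hit the tracker, and reuse their query string
        if (preg_match('#"GET /piwik/piwik\.php\?([^ "]+)#', $line, $matches)) {
            file_get_contents($preprod . '?' . $matches[1]);
        }
    }
    fclose($log);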

Once we have a system to assess performance, we could answer a few of the more common questions on a dedicated documentation page:

  • Does it use a lot of bandwidth on your site?
  • Does it take a lot of space/memory in the MySQL database?
  • Does it make your site load (and work) more slowly?
  • What is the expected DB size for a website with 1k, 10k, or 100k visits and 1M pages? List the server configuration required to run Piwik on a shared server and on a dedicated server, with examples of archiving processing times for high-traffic sites.
  • Suggest the best configuration settings for small and medium installs (the default config) and for higher-traffic installs

Interesting reads on scalability & performance

To study

  • Monitoring tools: Nagios, MRTG, Puppet, Munin, ganglia.sourceforge.net (graphs across multiple servers)

Change History (47)

comment:2 Changed 6 years ago by matt (mattab)

  • Description modified (diff)

o DB sharding

+ This is expected to be the most useful and powerful feature, and a required step to build a scalable Piwik: being able to partition the data horizontally across different MySQL servers.

+ The first step would be to isolate a DB layer in Piwik and make sure that all database queries are located inside distinct classes. We could then easily change the target server of these SQL queries depending on the idsite. For example, all sites from id=1 to id=1,000 would go to server1, and all sites from id=1,001 to id=2,000 would go to server2.

+ We want to keep all the logic in PHP/MySQL, so we have to do the partition/server lookup in code (in the DB layer); see the sketch after this list.

o Tools to manage sharding

+ When a website grows too big to fit on its existing shard, how do we move it to another shard? This will happen, and we can't afford downtime in the logging process.

+ Moving the partitions from one server to another.

o Parallelize archiving process (for example, archiving siteA and siteB can be done in parallel)

o At some point, archiving data for a single website could take several hours... can we split this process and run it every hour instead, then sum the data for the 24 hours? (This is quite a big change in the PHP logic.)
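
A minimal sketch of the idsite-to-shard lookup described above, assuming a fixed 1,000-sites-per-shard split (the shard map and credentials are placeholders):

    <?php
    // Route each site's queries to the MySQL server that holds its shard.
    function getDbForSite($idSite)
    {
        $shards = array(
            1 => 'mysql:host=server1;dbname=piwik',
            2 => 'mysql:host=server2;dbname=piwik',
        );
        // Sites 1-1000 live on shard 1, sites 1001-2000 on shard 2, etc.
        $shardId = (int) ceil($idSite / 1000);
        return new PDO($shards[$shardId], 'piwik_user', 'piwik_password');
    }

    // The DB layer would then route every query through this lookup, e.g.:
    // $db = getDbForSite($idSite);
    // $db->query('SELECT ... FROM log_visit WHERE idsite = ' . (int) $idSite);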

comment:3 Changed 6 years ago by matt (mattab)

getDateStart() and getDateEnd() in the Period classes are not optimized; the results could be cached, for example as sketched below.
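
A minimal memoization sketch, assuming the computed dates are stable per instance (the class shape mirrors the Period API, but the cache field and helper are assumptions):

    <?php
    // Cache the computed start date the first time it is requested.
    class Period
    {
        private $cachedDateStart = null;

        public function getDateStart()
        {
            if ($this->cachedDateStart === null) {
                $this->cachedDateStart = $this->computeDateStart();
            }
            return $this->cachedDateStart;
        }

        private function computeDateStart()
        {
            return date('Y-m-d'); // stand-in for the real (expensive) logic
        }
    }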

comment:5 Changed 5 years ago by matt (mattab)

When plugins are not used by the piwik.php logging script, don't load the related files (that was #19).

comment:8 Changed 5 years ago by matt (mattab)

  • Cc mauser added

Find memory leaks in PHP

If the image-processing extension uses emalloc()-style allocation, then you can compile PHP with --enable-debug and you will get a report of all leaks and their locations at the end of the script. Note that the script must finish with "return" or nothing to get the report, not with "exit".

But that won't pick up malloc() leaks. For those you can use http://www.valgrind.org if you are running under Unix.

https://www.zend.com/forums/index.php?t=msg&goto=13062&S=7f75627561a92cfe442aaed40c3306eb

xdebug.show_mem_delta
Type: integer, default value: 0
When this setting is set to something != 0, Xdebug's human-readable trace files will show the difference in memory usage between function calls. If Xdebug is configured to generate computer-readable trace files, they will always show this information.

http://www.xdebug.org/docs/execution_trace
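
For reference, a minimal php.ini sketch enabling the trace with memory deltas (directive names are from the Xdebug documentation linked above; the output directory is a placeholder):

    ; php.ini: enable Xdebug function traces with per-call memory deltas
    xdebug.auto_trace       = 1
    xdebug.trace_output_dir = /tmp/xdebug
    xdebug.show_mem_delta   = 1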

comment:9 Changed 5 years ago by matt (mattab)

One other idea would be to remove the count(distinct idvisitor) from the archiving query. Other products like GA don't give a unique count for each metric; this could eventually become a setting to decide whether or not to count uniques (see the sketch below).

We would still count uniques for the global stats: per month, week, and day.
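
A hedged sketch of what the optional metric could look like when building the archiving query ($countUniques is a hypothetical setting; the column names follow the log_visit schema):

    <?php
    // Build the archiving aggregate with or without the expensive
    // count(distinct idvisitor); $countUniques is a hypothetical setting.
    $countUniques = true;
    $select = "count(*) as nb_visits, sum(visit_total_actions) as nb_actions";
    if ($countUniques) {
        $select .= ", count(distinct idvisitor) as nb_uniq_visitors";
    }
    $sql = "SELECT $select
            FROM log_visit
            WHERE idsite = ?
              AND visit_last_action_time BETWEEN ? AND ?";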

comment:15 Changed 5 years ago by vipsoft (robocoder)

  • Type changed from Bug to Task

comment:16 Changed 5 years ago by vipsoft (robocoder)

  • Milestone changed from DigitalVibes to Stable release

comment:17 Changed 5 years ago by matt (mattab)

  • Description modified (diff)
  • Summary changed from Performance improvement to Scaling and Performance improvement - php, mysql, massive data set

comment:18 Changed 5 years ago by matt (mattab)

  • Description modified (diff)

comment:19 Changed 5 years ago by matt (mattab)

  • Summary changed from Scaling and Performance improvement - php, mysql, massive data set to Scaling Piwik - Performance improvement - php, mysql, massive data set

comment:20 Changed 5 years ago by klando

comment:22 in reply to: ↑ 21 ; follow-up: Changed 5 years ago by klando

comment:23 in reply to: ↑ 22 ; follow-up: Changed 5 years ago by klando

comment:24 in reply to: ↑ 23 Changed 5 years ago by klando

I have set up a git branch at GitHub to help port Piwik to PostgreSQL:

http://github.com/klando/pgpiwik/tree/svn-merge

Edit: I made a mistake with the branch naming. The GitHub branch is here: http://github.com/klando/pgpiwik/tree/master

To grab it, just run: git clone git://github.com/klando/pgpiwik.git

comment:25 follow-up: Changed 5 years ago by matt (mattab)

see also #620: Piwik should use autoload to automatically load all classes instead of using require_once; a minimal sketch follows.
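
A minimal autoloader sketch along the lines of #620, assuming a Piwik_Foo_Bar => core/Foo/Bar.php naming convention (the path mapping is an assumption):

    <?php
    // Load classes on demand instead of require_once'ing everything up front.
    function piwik_autoload($className)
    {
        // e.g. Piwik_Tracker_Visit => core/Tracker/Visit.php (assumed layout)
        $relative = preg_replace('/^Piwik_/', 'core/', $className);
        $path = str_replace('_', '/', $relative) . '.php';
        if (file_exists($path)) {
            require_once $path;
        }
    }
    spl_autoload_register('piwik_autoload');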

comment:26 in reply to: ↑ 25 Changed 5 years ago by klando

comment:27 Changed 5 years ago by alivenk

comment:28 Changed 5 years ago by alivenk

comment:29 Changed 5 years ago by domtop

comment:30 Changed 5 years ago by koteiko

comment:31 Changed 4 years ago by Marcox

comment:32 Changed 4 years ago by jasper_van_wanrooy

comment:33 Changed 4 years ago by matt (mattab)

  • Description modified (diff)
  • Sensitive unset

comment:34 Changed 4 years ago by plandem

comment:35 Changed 4 years ago by matt (mattab)

plandem, see the thread on piwik-hackers for some thinking around alternative NoSQL databases in Piwik: http://lists.piwik.org/pipermail/piwik-hackers/2010-February/000829.html

comment:37 Changed 4 years ago by matt (mattab)

InfiniDB sounds like something we should definitely investigate first, as it might be much (much) easier to use with the current Piwik architecture. Are there limitations when "dropping it in" instead of MySQL?

comment:38 Changed 4 years ago by vipsoft (robocoder)

There's now a migration guide for InfiniDB. The relevant section starts at page 17.

The only limitation of the open-source community edition is that it is restricted to a single machine (there is no limit on CPUs, RAM, or concurrent users). Theoretically, you can build a fairly powerful box (think multi-core, multi-processor boards) before you have to think about adding nodes (and license fees for the enterprise edition).

comment:39 Changed 4 years ago by jtommaney@…

It looks like you have a good handle on where to start looking at InfiniDB. I took a quick look at your schema and don't see any fundamental problem with the log_visit fact table or the queries that reference it above. We do not (yet) support blob columns for your archive tables. The only other quick note I would add is that we aren't optimal for web/OLTP-style loads. However, you could easily SELECT * INTO OUTFILE from the existing schema and LOAD DATA INFILE into InfiniDB to get good load rates, or use SELECT INTO OUTFILE plus cpimport (our bulk loader) to get excellent load rates. 100k or more rows/second is possible, but this will vary significantly based on disk and table definition.
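
A sketch of the export/import path described above, assuming both servers speak the MySQL protocol (hosts, credentials, and the delimiter are placeholders):

    <?php
    // Export from the existing MySQL schema, then bulk-load into InfiniDB.
    $mysql = new PDO('mysql:host=mysql-host;dbname=piwik', 'user', 'password');
    $mysql->exec("SELECT * INTO OUTFILE '/tmp/log_visit.txt'
                  FIELDS TERMINATED BY '|'
                  FROM log_visit");

    // If the hosts differ, the dump file must first be copied across.
    $infinidb = new PDO('mysql:host=infinidb-host;dbname=piwik', 'user', 'password');
    $infinidb->exec("LOAD DATA INFILE '/tmp/log_visit.txt'
                     INTO TABLE log_visit FIELDS TERMINATED BY '|'");

    // For the best load rates, use InfiniDB's cpimport bulk loader instead:
    //   cpimport piwik log_visit /tmp/log_visit.txt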

One additional note: our current parallelization distributes ranges of 8 million rows to each thread, so smaller tables won't show the same benefit from many cores as larger tables.

Anyway, I'm currently signed up to follow this discussion; let me know if you have any questions or comments. Thanks - Jim Tommaney

comment:41 Changed 4 years ago by matt (mattab)

  • Description modified (diff)

comment:42 Changed 4 years ago by matt (mattab)

  • Description modified (diff)

comment:43 Changed 4 years ago by matt (mattab)

  • Description modified (diff)

comment:44 Changed 4 years ago by matt (mattab)

  • Milestone changed from 4 - Piwik 1.0 - Stable release to Features requests - after Piwik 1.0

We will tackle the critical issues (#409, probably #1077) and postpone the others to post-1.0.

comment:45 Changed 3 years ago by matt (mattab)

It might be good to look into storing JSON-encoded data tables rather than serialized PHP tables; this would improve portability. See http://stackoverflow.com/questions/1306740/json-vs-serialized-array-in-database as a reference. The speed of JSON-decoding large arrays vs. unserialize() should be tested; a quick sketch follows.
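
A quick micro-benchmark sketch for that test (the array shape and size are arbitrary):

    <?php
    // Compare decode speed of a large DataTable-like array in both formats.
    $table = array_fill(0, 100000, array('label' => 'page', 'nb_visits' => 42));
    $json = json_encode($table);
    $php  = serialize($table);

    $start = microtime(true);
    json_decode($json, true);
    printf("json_decode: %.3fs\n", microtime(true) - $start);

    $start = microtime(true);
    unserialize($php);
    printf("unserialize: %.3fs\n", microtime(true) - $start);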

comment:46 Changed 3 years ago by matt (mattab)

  • Resolution set to invalid
  • Status changed from new to closed
  • Summary changed from Scaling Piwik - Performance improvement - php, mysql, massive data set to Dupe php, mysql, massive data set

I created a summary ticket from this one, as this ticket became unclear. See #1999

comment:47 Changed 3 years ago by matt (mattab)

  • Description modified (diff)