Suppose we want to move publication logs out of the shared `activity_logs` table and into their own `publication_logs` table. The straightforward strategy is to select the records we want to move, insert them one by one into the new table (here using a prepared statement), and finally delete them from the old table:

```ruby
# select records we want to move
publication_logs = DB[:activity_logs].where(event: "publication")

prepared_insert = DB[:publication_logs].prepare(:insert, :insert_publication_data,
  playlist_id: :$playlist_id,
  action:      :$action,
  target:      :$target,
  created_at:  :$created_at,
)

# insert each record individually into the new table
publication_logs.each do |log|
  prepared_insert.call(
    playlist_id: log[:playlist_id],
    action:      log[:action],
    target:      log[:target],
    created_at:  log[:created_at],
  )
end

# delete records from the old table
publication_logs.delete
```

This strategy would usually perform well enough on small to medium tables, but it so happens ours was a logs table with lots of records (about 200,000 IIRC). Since long-running migrations can generally be problematic, let's find a better approach.

Most SQL databases support inserting multiple records in a single query, which avoids a round trip to the database for every record. In PostgreSQL, the syntax looks like this (values illustrative):

```sql
INSERT INTO publication_logs (playlist_id, action, target, created_at)
VALUES (1, 'publish', 'video',    now()),
       (2, 'publish', 'playlist', now()),
       (3, 'publish', 'video',    now());
```

A Sequel sketch of this bulk insert, along with the batched variant mentioned below, follows at the end of this section.

Once the publication logs have been moved, we can drop the columns that are no longer needed on the old table:

```ruby
alter_table :activity_logs do
  drop_column :event            # this table will only hold approval logs now
  drop_column :target           # this was specific to publication logs
  set_column_not_null :user_id  # only publication logs didn't have user id set
end
```

## Measuring performance

I've created a script which populates the `activity_logs` table with 100,000 approval logs and 100,000 publication logs, and measures execution time and memory allocation of all 5 migration strategies we've talked about (the database is PostgreSQL).

As expected, inserting records individually is the slowest strategy. We can see that prepared statements provided a ~33% speedup, though at a cost: I was a bit surprised to see that they allocate significantly more memory than the bulk insert strategy, but I believe that's because we're allocating hashes for each record, while with bulk inserts we're allocating value arrays.

For bulk inserts, I expected the batching variant to be slower, because I imagined that we're trading off speed for reduced load.
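Here is a minimal sketch of the bulk insert strategy in Sequel, assuming `Dataset#import`, which takes a list of columns plus an array of value arrays (this matches the "value arrays" allocation pattern noted in the measurements above):

```ruby
# bulk insert: build one value array per record ...
columns = [:playlist_id, :action, :target, :created_at]

values = publication_logs.map do |log|
  [log[:playlist_id], log[:action], log[:target], log[:created_at]]
end

# ... and insert them all with a single multi-row INSERT
DB[:publication_logs].import(columns, values)

# delete records from the old table, as before
publication_logs.delete
```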
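The batching variant referenced in the measurements could then look like the following; the slice size of 1,000 is an illustrative choice that trades one huge query for several smaller ones, reducing load on the database:

```ruby
# batched bulk insert: one multi-row INSERT per 1,000 records,
# bounding query size at the cost of extra round trips
values.each_slice(1_000) do |batch|
  DB[:publication_logs].import(columns, batch)
end
```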
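And a sketch of how such measurements can be taken, assuming the memory_profiler gem; `run_migration_strategy` and the exact `activity_logs` column layout are hypothetical stand-ins:

```ruby
require "benchmark"
require "memory_profiler"

# populate the table with test data: 100,000 approval logs (with user_id)
# and 100,000 publication logs (without user_id)
DB[:activity_logs].import(
  [:event, :user_id, :playlist_id, :action, :target, :created_at],
  Array.new(100_000) { |i| ["approval",    i,   i, "approve", nil,     Time.now] } +
  Array.new(100_000) { |i| ["publication", nil, i, "publish", "video", Time.now] }
)

report = nil
time = Benchmark.realtime do
  # hypothetical wrapper around whichever strategy is being measured
  report = MemoryProfiler.report { run_migration_strategy }
end

puts "time:      #{time.round(2)}s"
puts "allocated: #{report.total_allocated_memsize / 1024**2} MB"
```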