


December 22nd, 2014

11:40 am: About People and Poverty

I want to tell a story, and I’m sure most people won’t like it for one reason or another. If you stop reading after the first paragraph or two, I won’t blame you. It’s hard to read, and says a lot of bad things about humanity. But I like to think that it also provides necessary perspective that helps society see where it needs to improve.

It’s about my family.

In a lot of ways, we’re not good people. My uncles are unashamedly racist, my aunt is a master manipulator, and my mother is almost incapable of supporting herself or finishing anything. All of them have one or several psychological problems, ranging from depression and bipolar disorder to body dysmorphic disorder or outright schizophrenia. Every single one. And it’s no accident.

It all started with my grandmother, who by all accounts, is one of the most terrible people I’ve ever met. Her second husband, you see, was a pedophile and a rapist, as my mother and her sister discovered before they were even ten years old. The first time she was raped, my mother made the fateful mistake of telling her mother. She was betrayed in the worst way a child can be. Her mother accused her of being a husband-stealing slut, and broke her nose.

Ever since that day, she fostered a simmering hate for my mother; mom’s childhood and everything after it suffered tremendously. From that point on, my mother would learn she was worthless, could do nothing right, and was beaten frequently for something as innocuous as improperly washing a dish. My aunt saw what happened and learned from my mother’s mistake, and thus began her own defense mechanism of influencing situations to avoid a similar fate. My uncles were faced with an impossible amount of cognitive dissonance: believe their parents were terrible people that would abuse innocent children, or that the children somehow deserved it. Their own beatings at childish mistakes helped decide which interpretation was correct.

Thus began a legacy that continues to this day. My grandmother was a lifelong smoker and finally died at the age of 72, but the damage she caused lives on. I’m honestly surprised I ever met her, but my mother was just as damaged as the rest of the kids. Her belief that her suffering was deserved meant she saw her past as just another family quirk. I like to think my hate for her mother rubbed off on her, but I’ll never really know for sure. Abuse can be fantastically enduring, and people are weak to indoctrination.

As a result, my mother didn’t really know how to raise me. Were I a less precocious child, I probably would have been deep into the drug scene, not to mention rebellious and resentful of everything and everyone for having a better life. Instead, I was incensed by our situation to study and dig my way out of poverty by any means necessary. But in the meantime the damage, as I said, was enduring. My mother had very little confidence, no higher education, and no parental advice on how to advocate for herself or work within the system. She was basically trained to be poor, because society demands adherence to somewhat strict rules necessary for smooth operation. Anyone who doesn’t fit that mold is discarded.

So we lived in trailers. Or with her boyfriends. Or, for a while, in a car. At one point, I slept on a couch for about a year when I was five. Mom would work multiple jobs, and since she couldn’t always afford a babysitter, sometimes I’d spend hours just hanging out at the 7-11 where she worked. It wasn’t uncommon for me to sleep in the stock room in the back. I can only imagine how impotent my mother must have felt back then, being forced to raise her child that way. But the social safety net is thin, and she was loath to use it more than necessary. If anyone is curious as to my vehement anger at our treatment of the poor, it starts here.

But that wasn’t all. Because her self-confidence was so terrible, and because she never learned what made a healthy relationship, she dated and married a string of verbally or physically abusive alcoholics. My heart condition made me a very weak and sickly child. These men told me that I would be better off dead because I was sick all the time, or rubbed my nose in soiled underwear because I had a weak bladder before I was three. My father abandoned us before I was born, and isn’t even listed on my birth certificate. I’ve never met the man, nor do I even know his name. As a result, mom never received any form of child support, not that he would have paid it, given his demonstrated lack of responsibility.

Now my mom is in her late 50s. Due to her age, lack of higher education and marketable skills, and fragmented or absent work history, she has almost no job prospects. She’s been subsisting for the last twenty years on side jobs and whatever she can make by working out of her home as a seamstress. As a result, she has no retirement savings, can’t pay her rent regularly, and can barely afford food. She just told me she finally broke down and went to a food bank last week.

I help when I can. Rent is expensive these days, and I already have my own to worry about. But I make sure she has a roof over her head, as dismal as a run-down one-room Chicago basement apartment can be. Her Chicago Ventra card is always loaded. When she asks, I help with bills, though I grumble a bit. I feel bad about giving her lectures about finances, knowing her irresponsibility and lack of foresight isn’t wholly her fault, but I’m only human. This is the world we live in.

If you’ve ever wondered why I’m painfully practical, am prone to completely unexpected bouts of anger, and complain about almost every dime I spend, there’s a long and sordid history behind it. Age and perspective have tamed some of that, but in a properly constructed society where people actually cared about each other, either my mother or I would have been intercepted before our problems became so deeply ingrained. It’s almost as if society is driven to facilitate the implements of its own demise.

In another universe, I could have been much different. Without a heart condition, perhaps I would have had enough energy to violently rebel against my situation, making trouble for everyone in my wake. I did that anyway, but only half-heartedly due to my reluctance to permanently damage my future prospects. One of my uncles wasn’t so restrained; given his past, he cared about little, and got in trouble and fights constantly into his 20s. Yet it didn’t have to be that way. If the community cared, and really paid attention, all of my grandmother’s children would have long been relocated to good homes before the damage was irreversible.

Instead the problem was ignored and propagated to yet another generation, and then another. Mine is hardly an isolated case, regardless of how it might seem. People like me get labeled as Trailer Trash, meth heads, or shiftless poor. People that fall through the cracks garner only resentment instead of the assistance they need to prevent the rot from corrupting our society. Our careless and cavalier attitude compounds the situation and fosters nothing but resentment between our artificial castes. It’s a troublesome brew that I’ve seen both sides of, and even though I’m middle class now, I vehemently hate our society and everything it represents.

In my mind, I’ll always be poor and wondering where my next meal will come from. I’ll always be that little kid who didn’t get a deadly heart condition corrected until I was six because we couldn’t afford the surgery. I’ll always be skeptical and leery of everyone’s intentions, and internally scream “Fuck you for judging me!” at every perceived slight. My rage will never be sated, no matter how collected I may seem on the surface.

Now imagine I was still poor. Imagine I still had nothing to lose. Imagine the damage I could do if I were backed into a corner. Just visit a poor neighborhood, and you don’t have to imagine. The resentment is almost palpable, and threatens to boil over at any provocation. I don’t condone when these people lash out, but I understand their motivation. Desperation is a dangerous thing, and every time we cut the social safety net in the name of fiscal responsibility, we bring the knife closer to our own throats.

My mom, my family, and I are merely symptoms of a much more insidious underlying problem. I wish I had an answer to any of this, but I don’t. But if we can’t even acknowledge the symptoms, we can hardly begin to develop a cure. It really does take a village to raise a child, and unfortunately, our village needs to stop and examine the children it’s been raising. We would all benefit from a little more care, and a lot less callous disregard. I’ve never understood why we don’t try harder, given the risk to reward ratio.

Given the season, please take the opportunity to give a little more love to your family. If the situation arises where you can share that love with someone else who needs it, please do so. A little can go a long way. I can assure you it won’t go unappreciated.

Originally published at BonesMoses.org. You can comment here or there.


November 5th, 2014

05:20 pm: On PostgreSQL View Dependencies

As many seasoned DBAs might know, there’s one area where PostgreSQL still manages to be highly aggravating: the role views have in mucking up dependencies. The part that annoys me personally is that it doesn’t have to be this way.

Take, for example, what happens if you try to modify a VARCHAR column so that the column length is higher. We’re not changing the type, or dropping the column, or anything overly complicated. Yet we’re faced with this message:

ERROR:  cannot alter type of a column used by a view or rule
DETAIL:  rule _RETURN on view v_change_me depends on column "too_short"

Though PostgreSQL tells us which view and column prompted this error, that’s the last favor it provides. The only current way to fix this error is to drop the view, alter the column, then recreate the view. In a production 24/7 environment, this is extremely problematic. The system I work with handles over two billion queries per day; there’s no way I’m dropping a view that the platform depends on, even in a transaction.

This problem is compounded when views depend on other views. The error doesn’t say so, but I defined another view named v_change_me_too that depends on v_change_me, yet I would never know it by the output PostgreSQL generated. Large production systems can have dozens, or even hundreds of views that depend on complex hierarchies of tables and other views. Yet there’s no built-in way to identify these views, let alone modify them safely.

If you want to follow along, this is the code I used to build my test case:

CREATE TABLE change_me ( too_short VARCHAR(30) );
CREATE VIEW v_change_me AS SELECT * FROM change_me;
CREATE VIEW v_change_me_too AS SELECT * FROM v_change_me;

And here’s the statement I used to try and make the column bigger:

ALTER TABLE change_me ALTER too_short TYPE VARCHAR(50);

It turns out we can solve this for some cases, though it takes a very convoluted path. The first thing we need to do is identify all of the views in the dependency chain. To do this, we need a recursive query. Here’s one that should find all the views in our sample chain, starting with the table itself:

WITH RECURSIVE vlist AS (
    SELECT c.oid::REGCLASS AS view_name
      FROM pg_class c
     WHERE c.relname = 'change_me'
     UNION ALL
    SELECT DISTINCT r.ev_class::REGCLASS AS view_name
      FROM pg_depend d
      JOIN pg_rewrite r ON (r.oid = d.objid)
      JOIN vlist ON (vlist.view_name = d.refobjid)
     WHERE d.refobjsubid != 0
)
SELECT * FROM vlist;

If we execute that query, both v_change_me and v_change_me_too will show up in the results. Keep in mind that in actual production systems, this list can be much longer. For systems that can survive downtime, this list can be passed to pg_dump to obtain all of the view definitions. That will allow a DBA to drop the views, modify the table, then accurately recreate them.

For simple cases where we’re just extending an existing column, we can take advantage of the fact that the pg_attribute catalog table allows direct manipulation. In PostgreSQL, a VARCHAR column stores an internal length (atttypmod) that is four bytes longer than the declared column limit. So we simply reuse the recursive query and extend that length:

WITH RECURSIVE vlist AS (
    SELECT c.oid::REGCLASS AS view_name
      FROM pg_class c
     WHERE c.relname = 'change_me'
     UNION ALL
    SELECT DISTINCT r.ev_class::REGCLASS AS view_name
      FROM pg_depend d
      JOIN pg_rewrite r ON (r.oid = d.objid)
      JOIN vlist ON (vlist.view_name = d.refobjid)
     WHERE d.refobjsubid != 0
)
UPDATE pg_attribute a
   SET atttypmod = 50 + 4
  FROM vlist
 WHERE a.attrelid = vlist.view_name
   AND a.attname = 'too_short';
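The `50 + 4` in that UPDATE comes from how PostgreSQL stores varchar limits: for a VARCHAR(n) column, atttypmod holds n plus four bytes of length header. A throwaway Python helper (purely illustrative, not part of the procedure) makes the arithmetic explicit:

```python
# For VARCHAR(n) in PostgreSQL, pg_attribute.atttypmod stores n + 4:
# the declared limit plus four bytes of varlena header.
VARHDRSZ = 4  # PostgreSQL's length-header size, in bytes

def varchar_typmod(declared_length):
    """atttypmod value to write for a VARCHAR(declared_length) column."""
    return declared_length + VARHDRSZ

def typmod_to_length(atttypmod):
    """Recover the declared VARCHAR limit from an atttypmod value."""
    return atttypmod - VARHDRSZ

print(varchar_typmod(50))    # 54: what the UPDATE above writes
print(typmod_to_length(34))  # 30: the original VARCHAR(30) limit
```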

Now, this isn’t exactly a perfect solution. If views alias the column name, things get a lot more complicated. We have to modify the recursive query to return both the view name and the column alias. Unfortunately, the pg_depend catalog always sets the objsubid column to 0 for views. The objsubid column is what determines which view column corresponds to the aliased column.

Without having this value, it becomes impossible to know what to modify in pg_attribute for the views. In effect, instead of being a doubly-linked list, pg_depend is a singly-linked list we can only follow backwards. So we can discover what the aliases depend on, but not what the aliases are. I can’t really think of any reason this would be set for tables, but not for views.

This means, of course, that large production systems will still need to revert to the DROP -> ALTER -> CREATE route for column changes to dependent views. But why? PostgreSQL knows the entire dependency chain. Why is it impossible to modify these in an atomic transaction context? If I have one hundred views on a table, why do I have to drop all of them before modifying the table? And, again, the type of modification in this example is extremely trivial; we’re not going from a TEXT to an INT, or anything that would require drastically altering the view logic.

For highly available databases, this makes it extremely difficult to use PostgreSQL without some type of short outage. Column modifications, while not common, are a necessary evil. Since it would be silly to recommend never using views, we have to live with downtime imposed by the database software. Now that PostgreSQL is becoming popular in enterprise settings, issues like this are gaining more visibility.

Hopefully this is one of those easy fixes they can patch into 9.5 or 9.6. If not, I can see it hampering adoption.



October 15th, 2014

03:56 pm: Why I’m Somewhat Worried About Ebola

Before I really get going with this post, I want to say I’m not panicked, and I suggest you stay the same. Meanwhile, it’s pretty clear the currently cavalier attitude toward Ebola needs to change. And of course, it all boils down to humans being the fallible creatures they are.

How Ebola Works

There’s good information on How Ebola Works, and how it kills you, but I’ll summarize. Ebola is a Biosafety Level 4 contagion, meaning the proper attire for interacting with the infected is a fully sealed safety suit with respirator, which should be decontaminated before and after exposure.

These kinds of precautions are necessary because Ebola is a hemorrhagic fever that causes multiple organ failure within days of exposure. How does that happen? Ebola is capable of replicating without the immune system taking immediate notice, because it attacks the dendritic cells of the immune system itself. Since these cells are how the immune system recognizes new invaders, there’s no defense while Ebola replicates. During the infection, it enters cells and makes them make more Ebola, which in turn causes them to explode. Eventually enough of this happens that the immune system actually does something once it notices the damage.

Unfortunately that something is a cytokine storm. In effect, the immune system freaks out and disgorges all of its killing might, severely damaging blood vessels in the process. This internal bleeding, in turn, causes blood pressure to drop. Combined with tissue damaged by Ebola, this leads to organ failure and eventually death. The mortality rate is quoted as anywhere between 25% and 95%, but it’s quite a bit more potent than the flu.

During all of this, you can’t have painkillers to ease the agony of your dying organs due to the likelihood your liver is among the casualties. It is a horrible, painful way to die that I wouldn’t wish on anyone.

How Does it Spread?

Some comparisons have been made suggesting Ebola is about as communicable as Hepatitis C. The CDC suggests that bodily fluids become a vector when the patient starts showing symptoms, which may take up to three weeks. With an incubation period that long, travelers can reach quite diverse destinations before symptoms appear and spreading becomes likely.

Usually it’s easy to avoid bodily fluids like blood, mucus, semen, or diarrhea. But the problem with Ebola is that sweat is also a vector. Even during a cool day, the body produces sweat, and that sweat can get smeared on things like doorknobs.

Why it Bothers Me

The patient who recently died in Dallas has already infected his third health care worker. I’m pretty sure Hepatitis C doesn’t spread so easily, or every nurse in the country would have it. What’s worse, the nurse traveled on a plane a day before being diagnosed, potentially infecting anyone she encountered at both the departure and destination airports, as well as anyone in the plane. Yes, that includes the flight attendants.

And that is the real problem: people.

People generally don’t wash their hands before eating or rubbing their eyes. People don’t cough or sneeze into their elbows. People work sick for fear of losing their jobs, or falling behind, thereby spreading diseases to the entire workplace. People reuse gloves. People travel when they’re not supposed to. People get scared and make mistakes. People cut budgets so there aren’t sufficient resources to handle outbreaks. People send patients home with incorrect diagnoses.

People suck.

We can claim “it will never happen here,” or “you’d have to roll around in Ebola diarrhea to catch it,” or “our infrastructure will prevent spreading,” but that’s all wrong. All it takes is one weak link: one lazy person who didn’t fill in a checkbox on a medical form; one person who neglected to change gloves between patients; one person who thinks the rules don’t apply to them; one person in denial about how sick they are; one person who works in fast food and can’t afford a sick day. Or in the case of Dallas, a hilariously incompetent string of mishaps that led to at least three nurses being infected, people who presumably have better access to sterilization and proper handling of contaminated material than the rest of us.

And all of this is happening right as we start to enter flu season. The symptoms of Ebola are very similar to the flu, which means misdiagnoses will become problematic. Being sloppy with the flu is a nuisance, but with Ebola, it’s deadly. We need to stop fucking around and get serious, or we’ll end up like Liberia and Sierra Leone.

We can mock those countries all we want for shoddy infrastructure and lack of education, but are we really any better? People are fallible, and whether they’re in the US or Africa, we need to account for Murphy’s Law and Finagle’s Corollary. We can call Dallas a fluke, but it’s not. Shit happens, and the sooner we accept that, the sooner we can actually address Ebola before it becomes a real problem.

Hopefully, we still can.



August 5th, 2014

02:35 pm: On PostgreSQL Logging Verbosity

Recently I stumbled across a question on Reddit regarding the performance impact of using pgBadger on an active database server. The only real answer to this question is: do not use pgBadger. Before anyone asks—no, you shouldn’t use pgFouine either. This is not an indictment of the quality of either project, but a statement of their obsolescence in the face of recent PostgreSQL features.

One of the recommended postgresql.conf changes for both of these tools is to set log_min_duration_statement to 0. There are a few other changes they require to put the log entries in the correct format, but we can worry about those later. For now, let’s focus on a little innocent math.

One of the PostgreSQL servers I work with processes almost two billion queries per day. Let’s imagine every such query is very simple, even though this is definitely not the case. Consider this example query:

SELECT col1, col2 FROM my_table WHERE id=?

Assuming the query is parameterized and the bound number ranges from one to a million, our average query length is 47 characters. Let’s just say it’s 50 to keep things easy. If we multiply that by two billion, that’s 100 billion bytes of logged SQL. Seen another way, that’s 93GB of logs per day, or about 1MB of log data per second.

In practice, such a short query will not constitute the bulk of a PostgreSQL server’s workload. In fact, if even a simple ORM is involved, all queries are likely to be far more verbose. Java’s Hibernate in particular is especially prone to prepending overly gratuitous aliases to all result columns. This is what our query would look like after Hibernate was done with it:

SELECT opsmytable1_.col1, opsmytable1_.col2
  FROM my_table opsmytable1_
 WHERE opsmytable1_.id=?

If we ignore the whitespace I added for readability, and use values from one to a million, the average query length becomes 99. Remember, this is ignoring all useful data PostgreSQL would also be logging! There are also a number of other problems with many of my operating assumptions. It’s very unlikely that query traffic will be consistent, nor will the queries themselves be so short. In addition, I didn’t account for the length of the log prefix that should contain relevant metadata about the query and its duration.
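Since these are just character counts, the estimates are easy to verify without a database. Here’s a quick sketch in plain Python that recomputes the average statement length for bound values from one to a million, for both query shapes, along with the daily log volume estimate:

```python
# Recompute the average logged statement length for id values 1..1,000,000.
simple = "SELECT col1, col2 FROM my_table WHERE id="
hibernate = ("SELECT opsmytable1_.col1, opsmytable1_.col2 "
             "FROM my_table opsmytable1_ WHERE opsmytable1_.id=")

# Total digit count across 1..1,000,000, grouped by digit length.
digit_total = sum(n * count for n, count in
                  [(1, 9), (2, 90), (3, 900), (4, 9_000),
                   (5, 90_000), (6, 900_000), (7, 1)])
avg_digits = digit_total / 1_000_000

print(round(len(simple) + avg_digits))     # ~47 characters
print(round(len(hibernate) + avg_digits))  # ~99 characters

# At a rounded 50 bytes per statement and two billion queries per day:
print(round(50 * 2_000_000_000 / 1024**3))  # ~93 GB of raw SQL daily
```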

Once on a boring day, I enabled all query logging just to see how verbose our logs became. On that fateful day, I set log_min_duration_statement to 0 for approximately ten seconds, and the result was 140MB worth of log files. Thus was my curiosity sated, and my soul filled with abject horror. Faced with such untenable output, how can we do log analysis? There’s no way pgBadger can process 100GB of logs in a timely manner. I tried using it a while ago, and even that ten seconds of log output required over a minute of processing.

It turns out PostgreSQL has had an answer to this for a while, but it wasn’t until the release of 9.2 that the feature became mature enough to use regularly. The pg_stat_statements extension maintains a system view that tracks query performance data in real time. Constants and variables are replaced to generalize the results, and it exposes information such as the number of executions, the total run time of all executions, the number of rows matched, and so on. This is more than any log processing utility can do given the most verbose settings available.

I could spend hours churning through log files, or I can execute a query like this:

SELECT calls, total_time/calls AS avg_time, query
  FROM pg_stat_statements
 ORDER BY avg_time DESC
 LIMIT 10;

That query just returned the ten slowest queries in the database. I could easily modify this query to find the most frequently executed queries, and thus improve our caching layer to include that data. This module can be directly responsible for platform improvements if used properly, and the amount of overhead is minimal.
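For example, a minimal variation on the same view (assuming the extension has been installed with CREATE EXTENSION pg_stat_statements) surfaces the most frequently executed statements instead:

```sql
SELECT calls, total_time/calls AS avg_time, query
  FROM pg_stat_statements
 ORDER BY calls DESC
 LIMIT 10;
```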

In addition, the log settings are still available in conjunction with pg_stat_statements. I normally recommend setting log_min_duration_statement to a value that’s high enough to remove log noise, but low enough that it exposes problems early. I have ours set to 1000 so any query that runs longer than one second is exposed. Even on a system as active as ours, this produces about 5MB of log entries per day. This is a much more reasonable amount of data for log analysis, spot-checking, or finding abnormal system behavior.
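In postgresql.conf terms, the combination discussed here boils down to a couple of lines. The 1000ms threshold is simply what works for our system, so treat it as a starting point rather than a recommendation:

```
# postgresql.conf excerpt (illustrative values)
shared_preload_libraries = 'pg_stat_statements'  # extension needs a restart
log_min_duration_statement = 1000                # log anything over 1 second
```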

All of this said, we could just as easily watch the database cluster and set log_min_duration_statement to a nonzero amount of milliseconds. For most systems, even 20 milliseconds would be enough to prevent log output from saturating our disk write queue. However, the pg_stat_statements extension automatically takes care of performance statistics without any post-processing or corresponding increase in log verbosity, so why add pgBadger to the stack at all?

There may be a compelling argument I’m missing, but for now I suggest using pg_stat_statements without PostgreSQL-focused log post-processing. Ops tools like Graylog or logstash are specifically designed to parse logs for monitoring significant events, and keeping the signal to noise ratio high is better for these tools.

Save logs for errors, warnings, and notices; PostgreSQL is great at keeping track of its own performance.



July 29th, 2014

03:23 pm: Finally Done With High Availability

Well, my publisher recently informed me that the book I’ve been slaving over for almost a year is finally finished. I must admit that PostgreSQL 9 High Availability Cookbook is somewhat awkward as a title, but that doesn’t detract from the contents. I’d like to discuss primarily why I wrote it.

When Packt first approached me in October of 2013, I was skeptical. I have to admit that I’m not a huge fan of the “cookbook” style they’ve been pushing lately. Yet the more I thought about it, the more I realized it was something the community needed. I’ve worked almost exclusively with PostgreSQL since late 2005, with databases big and small. It was always the big ones that presented difficulties.

Back then, disaster recovery nodes were warm standby through continuous recovery at best, and pg_basebackup didn’t exist. Nor did pg_upgrade, actually. Everyone had their own favorite backup script, and major upgrades required dumping the entire database and importing it in the new version. To work with PostgreSQL then required a much deeper understanding than is necessary now. Those days forced me to really understand how PostgreSQL functions, which caveats to acknowledge, and which needed redress.

One of those caveats that still called out to me was adoption. With a lot of the rough edges removed in recent releases of PostgreSQL came increased usage in small and large businesses alike. I fully expected PostgreSQL to be used in a relatively small customer acquisition firm, for instance, but then I started seeing it in heavy-duty financial platforms. Corporate deployments of PostgreSQL require various levels of high availability, from redundant hardware all the way to WAL stream management and automated failover systems.

When I started working with OptionsHouse in 2010, their platform handled 8,000 database transactions per second. Over the years, that has increased to around 17k, and I’ve seen spikes over 20k. At these levels, standard storage solutions break down, and even failover systems are disruptive. Any outage must be as short as possible, and the replacement must be instantly available with little to no dependency on cache warming. Our backup system had to run on the warm standby or risk slowing down our primary database. Little by little, I broke the database cluster into assigned roles to stave off the total destruction I felt was imminent.

I was mostly scared of the size of the installation and its amount of activity. Basic calculations told me the database handled over a billion queries per day, at such a rate that even one minute of downtime could potentially cost us tens of thousands in commissions. But I had no playbook. There was nothing I could use as a guide so that I knew what to look for when things went wrong, or how I could build a stable stack that generally took care of itself. It was overwhelming.

This book, as overly verbose as the title might be, is my contribution to all of the DBAs out there that might have to administer a database that demands high availability. It’s as in-depth as I could get without diverging too much from the cookbook style, and there are plenty of links for those who want to learn beyond the scope of its content. The core however, is there. Anyone with a good understanding of Linux could pick it up and weave a highly available cluster of PostgreSQL systems without worrying, or having to build too many of their own tools.

If I’ve helped even one DBA with this high availability book, I’ll consider my mission accomplished. It’s the culmination of years of experimentation, research, and performance testing. I owe it to the PostgreSQL community—which has helped me out of many jams—to share my experience however I can.

Thanks, everyone!



July 25th, 2014

12:12 pm: Friends Don’t Let Friends Use Loops

Programming is fun. I love programming! Ever since I changed my career from programming to database work, I’ve still occasionally dabbled in my former craft. As such, I believe I can say this with a fair amount of accuracy: programmers don’t understand databases. This isn’t something small, either; there’s a fundamental misunderstanding at play. Unless the coder happens to work primarily with graphics, bulk set-based transformations are not something they’ll generally work with.

For instance, if tasked with inserting ten thousand records into a database table, a programmer might simply open the data source and insert them one by one. Consider this basic (non-normalized) table with a couple basic indexes:

CREATE TABLE sensor_log (
  sensor_log_id    SERIAL PRIMARY KEY,
  location         VARCHAR NOT NULL,
  reading          BIGINT NOT NULL,
  reading_date     TIMESTAMP NOT NULL
);

CREATE INDEX idx_sensor_log_location ON sensor_log (location);
CREATE INDEX idx_sensor_log_date ON sensor_log (reading_date);

Now suppose we have a file with ten thousand lines of something like this:

38c-1401,293857,2014-07-25 10:18:38-05:00
69a-8921,209574,2014-07-25 10:18:25-05:00
9e5-0942,690134,2014-07-25 10:18:16-05:00

To load this data, our coder chooses Python and whips up an insert script. Let’s even give the programmer the benefit of the doubt, and say they know that prepared queries are faster due to less overhead. I see scripts like this all the time, written in languages from Java to Erlang. This one is no different:

import psycopg2

db_conn = psycopg2.connect(database = 'postgres', user = 'postgres')
cur = db_conn.cursor()

cur.execute(
    """PREPARE log_add AS
       INSERT INTO sensor_log (location, reading, reading_date)
       VALUES ($1, $2, $3);"""
)

file_input = open('/tmp/input.csv', 'r')
for line in file_input:
    cur.execute("EXECUTE log_add(%s, %s, %s)", line.strip().split(','))
file_input.close()

cur.execute("DEALLOCATE log_add")
db_conn.commit()
db_conn.close()


It’s unlikely we have the /tmp/input.csv file itself, but we can generate one. Suppose we have 100 locations each with 100 sensors. We could produce a fake input file with this SQL:

COPY (
    SELECT substring(md5((a.id % 100)::TEXT), 1, 3) || '-' ||
           to_char(a.id % 100, 'FM0999') AS location,
           (a.id * random() * 1000)::INT AS reading,
           now() - (a.id % 60 || 's')::INTERVAL AS reading_date
      FROM generate_series(1, 10000) a (id)
) TO '/tmp/input.csv' WITH CSV;

Whew! That was a lot of work. Now, let’s see what happens when we time the inserts on an empty import table:

real    0m1.162s
user    0m0.168s
sys     0m0.122s

Well, a little over one second isn’t that bad. But suppose we rewrote the Python script a bit. Bear with me; I’m going to be silly and use the script as a simple pass-through. This should simulate a process that applies transformations and outputs another file for direct database import. Here we go:

import psycopg2

file_input = open('/tmp/input.csv', 'r')
processed = open('/tmp/processed.csv', 'w+')

for line in file_input:
    parts = line.strip().split(',')
    processed.write(','.join(parts) + '\n')

file_input.close()
processed.seek(0)

db_conn = psycopg2.connect(database = 'postgres', user = 'postgres')
cur = db_conn.cursor()
cur.copy_from(
    processed, 'sensor_log', ',',
    columns = ('location', 'reading', 'reading_date')
)

db_conn.commit()

Now let’s look at the timings involved again:

real    0m0.365s
user    0m0.056s
sys     0m0.004s

That’s about three times faster! Considering how simple this example is, that’s actually pretty drastic. We don’t have many indexes, the table has few columns, and the number of rows is relatively small. The situation gets far worse as all of those things increase.

It’s also not the end of our story. What happens if we enable autocommit, so that each insert gets its own transaction? Some ORMs might do this, or a naive developer might try generating a single script full of insert statements, and not know much about transactions. Let’s see:

real    1m31.794s
user    0m0.233s
sys     0m0.172s

Oh. Ouch. What normally takes around a third of a second can balloon all the way out to a minute and a half. This is one of the reasons I strongly advocate educating developers on proper data import techniques. It’s one thing for a job to be three to five times slower due to inefficient design. It’s quite another to be nearly 250 times slower simply because a programmer believes producing the output file was fast, so logically, inserting it should be similarly speedy. Both scenarios can be avoided by educating anyone involved with data manipulation.
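The transaction effect is easy to demonstrate even without PostgreSQL. This Python sketch uses the standard library's SQLite module as a stand-in transactional database; the gap is far more dramatic on a real server, where every commit must also be flushed to disk:

```python
import sqlite3
import time

def load(rows, commit_each):
    """Insert rows, committing per statement or once at the end."""
    conn = sqlite3.connect(':memory:')
    conn.execute("CREATE TABLE sensor_log (location TEXT, reading INT)")
    conn.commit()
    start = time.perf_counter()
    for loc, reading in rows:
        conn.execute("INSERT INTO sensor_log VALUES (?, ?)", (loc, reading))
        if commit_each:
            conn.commit()   # one transaction per insert, autocommit style
    conn.commit()           # otherwise one transaction for everything
    elapsed = time.perf_counter() - start
    count = conn.execute("SELECT count(*) FROM sensor_log").fetchone()[0]
    conn.close()
    return count, elapsed

rows = [('loc-%02d' % (i % 100), i) for i in range(10000)]
n_one, t_one = load(rows, commit_each=False)
n_auto, t_auto = load(rows, commit_each=True)
print('rows loaded:', n_one, n_auto)
print('single transaction: %.3fs  per-insert commits: %.3fs' % (t_one, t_auto))
```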

This doesn’t just apply to new hires. Keeping everyone up to date on new techniques is equally important, as are refresher courses. I care about my database, so when possible, I try to over-share as much information as I can manage. I even wrote several talks that I periodically present to our application developers to encourage better database use. Our company wiki is similarly full of information, which I also walk people through on occasion, if only because reading technical manuals can be quite boring.

If my database is being abused, it’s my job as a DBA to try and alleviate the situation any way I can. Sometimes, that means telling people what they’re doing wrong, and how they can fix it. I certainly didn’t know all of this ten years ago when I was primarily a coder. But I would have appreciated being pointed in the right direction by someone with more experience in the field.

Your database and users deserve it.

Originally published at BonesMoses.org. You can comment here or there.


June 11th, 2014

04:14 pm: Foreign Tables are not as Useful as I Hoped

When I heard about foreign tables using the new postgres_fdw foreign data wrapper in PostgreSQL 9.3, I was pretty excited. We hadn’t upgraded to 9.3 so I waited until we did before I did any serious testing. Having done more experimentation with it, I have to say I’m somewhat disappointed. Why? Because of how authentication was implemented.

I’m going to get this out of the way now: The postgres_fdw foreign data wrapper only works with hard-coded plain-text passwords, forever the bane of security-conscious IT teams everywhere. These passwords aren’t even obfuscated or encrypted locally. The only implemented security is that the pg_user_mapping table is limited to superuser access to actually see the raw passwords. Everyone else sees this:

postgres=> SELECT * FROM pg_user_mapping;
ERROR:  permission denied for relation pg_user_mapping

The presumption is that a database superuser can change everyone’s password anyway, so it probably doesn’t matter that it’s hardcoded and visible in this view. And the developers have a point; without the raw password, how can a server-launched client log into the remote database? Perhaps the real problem is that there’s no mechanism for forwarding authentication from database to database.

This is especially problematic when attempting to federate a large database cluster. If I have a dozen nodes that all have the same user credentials, I have to create mappings to every single user, for every single foreign table, on every single independent node, or revert to trust-based authentication.
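To put that in concrete terms, each node needs boilerplate along these lines for every user; the server, host, and user names here are hypothetical, but the plain-text password is not optional:

```sql
CREATE SERVER node_b
  FOREIGN DATA WRAPPER postgres_fdw
  OPTIONS (host 'node-b.example.com', dbname 'postgres');

-- One of these per user, per node, with a hard-coded password.
CREATE USER MAPPING FOR some_user
  SERVER node_b
  OPTIONS (user 'some_user', password 'plain-text-here');

CREATE FOREIGN TABLE remote_sensor_log (
  sensor_log_id  BIGINT,
  location       VARCHAR,
  reading        BIGINT,
  reading_date   TIMESTAMP
) SERVER node_b OPTIONS (table_name 'sensor_log');
```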

This can be scripted to a certain extent, but to what end? If a user were to change their own password, it would break every foreign data wrapper they could previously access. That user would then have to give their password to the DBA to broadcast across all the nodes with modifications to the user mappings. In cases where LDAP, Kerberos, GSSAPI, peer, or other token-forwarding authentication is in place, this might not even be possible or advisable.

Oracle solved this problem by tying DBLINK tables to a specific user during creation time. An access to a certain table authenticates as that user in all cases. This means a DBA can set aside a specific user for foreign table access purposes, and use a password that’s easy to change across the cluster if necessary. Grants take care of who has access to these objects. Of course, since postgres_fdw is read/write, this would cause numerous permissions concerns.

So what are we left with? How can we actually use PostgreSQL foreign tables securely? At this point, I don’t believe it’s possible unless I’m missing something. And I’m extremely confused at how this feature got so far along without any real way to lock it down in the face of malleable passwords. Our systems have dozens of users who are forced by company policy to change their passwords every 90 days, thus none of these users can effectively access any foreign table I’d like to create.

And no, you can’t create a mapping and then grant access to it. In the face of multiple mapping grants, which one would PostgreSQL use? No, if there’s a way to solve this particular little snag, it won’t be that convenient. If anyone has ideas, or would like to go into length at how wrong I am, please do! Otherwise, I’m going to have to use internal users of my own design and materialized views to wrap the foreign tables; extremely large tables will need some other solution.



May 27th, 2014

09:02 am: PGCon 2014 Unconference: A Community

This May, I attended my first international conference: PGCon 2014. Though the schedule spanned from May 20th to May 23rd, I came primarily for the talks. Then there was the Unconference on the 24th. I’d never heard of such a thing, but it was billed as a good way to network and find out what community members want from PostgreSQL. After attending the Unconference, I must admit I’m exceptionally glad it exists; it’s something I believe every strong Open Source project needs.

Why do I say that, having only been to one of them? It’s actually fairly simple. Around 10 AM Saturday, everyone piled into the large lecture hall and had a seat. There were significantly fewer attendees than earlier in the week, but most of the core committers remained. We were promised pizza and PostgreSQL, and that’s all anyone needed. Josh Berkus started the festivities by announcing the rules and polling for ideas. The final schedule was pretty interesting in itself, but I was more enamored by the process and the response it elicited.

I’m no stranger to the community, and the mailing lists are almost overwhelmingly active. But these conversations, nearly all of them, are focused on assistance and hacker background noise. The thing that stood out to me during the Unconference planning was its organic nature. It wasn’t just that we chose the event schedule democratically. It wasn’t the wide range of topics. It wasn’t even the fact core members were there to listen. It was the engagement.

These people were excited and enjoying talking about PostgreSQL in a way I’ve never witnessed, and I’ve spoken at Postgres Open twice so far. I’ve seen several talks, been on both sides of the podium, and no previous experience even comes close. We were all having fun brainstorming about PostgreSQL and its future. For one day, it wasn’t about pre-cooked presentations chosen via committee, but about what “the community” wanted to discuss.

When it came time for the talks themselves, this atmosphere persisted. We agreed and disagreed, we had long and concise arguments for and against ideas, clarified our positions, and generally converged toward a loose consensus. And it was glorious. I know we were recording the sessions, so if you have the time when the videos are available, I urge you to watch just one so you can see the beauty and flow of our conversations.

I feel so strongly about this that I believe PGCon needs to start a day earlier. One unfortunate element about the Unconference is that it happens on a Saturday, when everyone wants to leave and return to their families. Worse, there is a marathon on Sunday, meaning it is difficult or even impossible to secure a hotel room for the Saturday event. People tend to follow the path of least resistance, so if there is a problem getting lodging, they won’t go.

And that’s a shame. Having a core of interested and engaged community members not only improves the reputation of PostgreSQL, but its advocacy as well. If people feel they can contribute without having to code, they’ll be more likely to do so. If those contributions, no matter how small, are acknowledged, their progenitors will stick around. I believe this is the grass-roots effort that makes PostgreSQL the future of the database world, and whoever came up with the Unconference deserves every accolade I can exclaim.

We need more of this. PostgreSQL has one of the most open communities I’ve had the pleasure of participating in, and that kind of momentum can’t really be forced. I hope every future PostgreSQL conference in every country has one of these, so everyone with the motivation can take part in the process.

Finally, find your nearest PostgreSQL User Group, and join the community. You’ll be glad you did.



May 14th, 2014

04:40 pm: Foreign Keys are Not Free

PostgreSQL is a pretty good database, and I enjoy working with it. However, there is an implementation detail that not everyone knows about, which can drastically affect table performance. What is this mysterious feature? I am, of course, referring to foreign keys.

Foreign keys are normally a part of good database design, and for good reason. They inform about entity relationships, and they verify, enforce, and maintain those relationships. Yet all of this comes at a cost that might surprise you. In PostgreSQL, every foreign key is maintained with an invisible system-level trigger added to the source table in the reference. At least one trigger must go here, as any operation that modifies the source data must be checked to ensure it does not violate the constraint.
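These constraint triggers are normally hidden from view, but the system catalogs expose them. A query along these lines (joining through pg_trigger.tgconstraint) lists the internal triggers backing each foreign key:

```sql
SELECT t.tgrelid::regclass AS table_name,
       t.tgname,
       c.conname AS constraint_name
  FROM pg_trigger t
  JOIN pg_constraint c ON (c.oid = t.tgconstraint)
 WHERE c.contype = 'f';
```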

This query is an easy way to see how many foreign keys are associated with every table in an entire PostgreSQL database:

SELECT t.oid::regclass::text AS table_name, count(1) AS total
  FROM pg_constraint c
  JOIN pg_class t ON (t.oid = c.confrelid)
 GROUP BY table_name;
With this in mind, consider how much overhead each trigger incurs on the referenced table. We can actually calculate this overhead. Consider this function:

CREATE OR REPLACE FUNCTION fnc_check_fk_overhead(key_count INT)
RETURNS VOID AS
$$
DECLARE
  i INT;
BEGIN
  CREATE TABLE test_fk
  (
    id    BIGINT PRIMARY KEY,
    junk  VARCHAR
  );

  INSERT INTO test_fk
  SELECT generate_series(1, 100000), repeat(' ', 20);

  CLUSTER test_fk_pkey ON test_fk;

  FOR i IN 1..key_count LOOP
    EXECUTE 'CREATE TABLE test_fk_ref_' || i ||
            ' (test_fk_id BIGINT REFERENCES test_fk (id))';
  END LOOP;

  FOR i IN 1..100000 LOOP
    UPDATE test_fk SET junk = '                    '
     WHERE id = i;
  END LOOP;

  FOR i IN 1..key_count LOOP
    EXECUTE 'DROP TABLE test_fk_ref_' || i;
  END LOOP;

  DROP TABLE test_fk;
END;
$$ LANGUAGE plpgsql;


The function is designed to create a simple two-column table, fill it with 100,000 records, and test how long it takes to update every record. This is purely meant to simulate a high-transaction load caused by multiple clients. I know no sane developer would actually update so many records this way.

The only parameter this function accepts is the number of tables that should be created to reference the source table. Every referring table is empty, and has only one column so the reference is valid. After the foreign key tables are created, the function performs those 100,000 updates, and we can measure the runtime with our favorite SQL tool. Here is a quick test with psql (with \timing enabled):

SELECT fnc_check_fk_overhead(0);
SELECT fnc_check_fk_overhead(5);
SELECT fnc_check_fk_overhead(10);
SELECT fnc_check_fk_overhead(15);
SELECT fnc_check_fk_overhead(20);

On our system, each timing was collected over three runs and averaged: 2961ms, 3805ms, 4606ms, 5089ms, and 5785ms. As we can see, with merely five foreign keys, the performance of our updates drops by 28.5%. By the time we have 20 foreign keys, the updates are 95% slower!

I don’t mention this to make you abandon foreign keys. However, if you are in charge of an extremely active OLTP system, you might consider removing any non-critical FK constraints. If the values are merely informative, or will not cause any integrity concerns, a foreign key is not required. Indeed, excessive foreign keys are actually detrimental to the database in a very tangible way.

I merely ask you keep this in mind when designing or revising schemas for yourself or developers you support.



May 8th, 2014

12:24 pm: Trumping the PostgreSQL Query Planner

With the release of PostgreSQL 8.4, the community gained the ability to use CTE syntax. As such, this is a fairly old feature, yet it’s still misunderstood in a lot of ways. At the same time, the query planner has been advancing incrementally since that time. Most recently, PostgreSQL has gained the ability to perform index-only scans, making it possible to fetch results straight from the index, without confirming rows with the table data.

Unfortunately, this still isn’t enough. There are still quite a few areas where the PostgreSQL query planner is extremely naive, despite the advances we’ve seen recently. For instance, PostgreSQL still can’t do a basic loose index scan natively. It has to be tricked by using CTE syntax.

To demonstrate this further, imagine this relatively common scenario: an order processing system where clients can order products. What happens when we want to find the most recent order for all current customers? Boiled down to its minimum elements, this extremely simplified table will act as our order system.

CREATE TABLE test_order
(
  client_id   INT        NOT NULL,
  order_date  TIMESTAMP  NOT NULL,
  filler      TEXT       NOT NULL
);
Now we need data to test with. We can simulate a relatively old order processing system by taking the current date and subtracting 1,000 days. We can also bootstrap with 10,000 clients, and make the assumption that newer clients will be more active. This allows us to represent clients that have left our services as time goes on. So we start with this test data:

INSERT INTO test_order
SELECT s1.id,
       (CURRENT_DATE - INTERVAL '1000 days')::DATE 
           + generate_series(1, s1.id%1000),
       repeat(' ', 20)
  FROM generate_series(1, 10000) s1 (id);

The generate_series function is very handy for building fake data. We’re still not ready to use that data, however. Since we want to find the most recent order for all customers, we need an index that will combine the client_id and order_date columns in such a way that a single lookup will provide the value we want for any particular client. This index should do nicely:

CREATE INDEX idx_test_order_client_id_order_date
    ON test_order (client_id, order_date DESC);

Finally, we analyze to make sure the PostgreSQL engine has the most recent stats for our table. Just to make everything easily repeatable, we also set default_statistics_target to a higher value than the default:

SET default_statistics_target TO 500;
ANALYZE test_order;

Now we’ll start with the most obvious query. Here, we just use the client_id column and look for the max order_date for each:

SELECT client_id, max(order_date)
  FROM test_order
 GROUP BY client_id;

The query plan is fairly straightforward, and will probably include a sequence scan. On the virtual server we’re testing with, the total runtime for us ended up looking like this:

Total runtime: 1117.408 ms

There is some variance, but the end result is just over one second per execution. We ran this query several times to ensure it was properly cached by PostgreSQL. Why didn’t the planner use the index we created? Let’s assume the planner doesn’t know what max does, and treats it like any other function. With that in mind, we can exploit a different type of syntax that should make the index much more usable. So let’s try DISTINCT ON with an explicit ORDER clause that matches the definition of our index:

EXPLAIN ANALYZE
SELECT DISTINCT ON (client_id) client_id, order_date
  FROM test_order
 ORDER BY client_id, order_date DESC;

Well, this time our test system used an index-only scan, and produced the results somewhat faster. Our new runtime looks like this:

Total runtime: 923.300 ms

That’s almost 20% faster than the sequence scan. Depending on how much bigger the table is than the index, reading the index and producing these results can vary significantly. And while the query time improved, it’s still pretty bad. For systems with tens or hundreds of millions of orders, the performance of this query will continue to degrade along with the row count. We’re also not really using the index effectively.

Reading the index from top to bottom and pulling out the desired results is faster than reading the whole table. But why should we do that? Due to the way we built this index, the root node for each client should always represent the value we’re looking for. So why doesn’t the planner simply perform a shallow index scan along the root nodes? It doesn’t matter what the reason is, because we can force it to do so. This is going to be ugly, but this query will act just as we described:

WITH RECURSIVE skip AS (
  (SELECT client_id, order_date
    FROM test_order
   ORDER BY client_id, order_date DESC
   LIMIT 1)
  UNION ALL
  (SELECT (SELECT min(client_id)
             FROM test_order
            WHERE client_id > skip.client_id
          ) AS client_id,
          (SELECT max(order_date)
             FROM test_order
            WHERE client_id = (
                    SELECT min(client_id)
                      FROM test_order
                     WHERE client_id > skip.client_id
                  )
          ) AS order_date
     FROM skip
    WHERE skip.client_id IS NOT NULL)
)
SELECT *
  FROM skip;

The query plan for this is extremely convoluted, and we’re not even going to try to explain what it’s doing. But the final query execution time is hard to discount:

Total runtime: 181.501 ms

So what happened here? How can the abusive and ugly CTE above outwit the PostgreSQL query planner? We use the same principle as described in the PostgreSQL wiki for loose index scans. We start with the desired maximum order date for a single client_id, then recursively begin adding clients one by one until the index is exhausted. Due to limitations preventing us from using the recursive element in a sub-query, we have to use the SELECT clause to get the next client ID and the associated order date for that client.

This technique works universally for performing sparse index scans, and actually improves as cardinality (the number of unique values) decreases. As unlikely as that sounds, since we are only using the root nodes within the index tree, performance increases when there are fewer root nodes to check. This is the exact opposite of how indexes are normally used, so we can see why PostgreSQL doesn’t natively integrate this technique. Yet we would like to see it added eventually, so query authors can use the first query example we wrote instead of the excessively unintuitive version that actually produced good performance.

In any case, all PostgreSQL DBAs owe it to themselves and their clusters to learn CTEs. They provide a powerful override for the query planner, and help solve the edge cases it doesn’t yet handle.

