py-postgresql¶
py-postgresql is a project dedicated to improving the Python client interfaces to PostgreSQL.
At its core, py-postgresql provides a PG-API, postgresql.api, and DB-API 2.0 interface for using a PostgreSQL database.
Contents¶
Administration¶
This chapter covers the administration of py-postgresql. This includes installation and other aspects of working with py-postgresql such as environment variables and configuration files.
Installation¶
py-postgresql uses Python’s distutils package to manage the build and
installation process of the package. The normal entry point for
this is the setup.py
script contained in the root project directory.
After extracting the archive and changing into the project's directory, installation is normally as simple as:
$ python3 ./setup.py install
However, if you need to install for use with a particular version of python, just use the path of the executable that should be used:
$ /usr/opt/bin/python3 ./setup.py install
Environment¶
These environment variables affect the operation of the package:
PGINSTALLATION
- The path to the pg_config executable of the installation to use by default.
Driver¶
postgresql.driver provides a PG-API, postgresql.api, interface to a PostgreSQL server using PQ version 3.0 to facilitate communication. It makes use of the protocol’s extended features to provide binary datatype transmission and protocol level prepared statements for strongly typed parameters.
postgresql.driver currently supports PostgreSQL servers as far back as 8.0. Prior versions are not tested. While any version of PostgreSQL supporting version 3.0 of the PQ protocol should work, many features may not work due to absent functionality in the remote end.
For DB-API 2.0 users, the driver module is located at postgresql.driver.dbapi20. The DB-API 2.0 interface extends PG-API. All of the features discussed in this chapter are available on DB-API connections.
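A minimal sketch of DB-API 2.0 usage, assuming the connection keywords shown here (the database and host values are placeholders for illustration):
>>> import postgresql.driver.dbapi20 as dbapi20
>>> con = dbapi20.connect(user = 'user', database = 'postgres', host = 'localhost')
>>> cur = con.cursor()
>>> cur.execute("SELECT 1")
>>> cur.fetchone()
(1,)
>>> con.close()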
Warning
PostgreSQL versions 8.1 and earlier do not support standard conforming
strings. In order to avoid subjective escape methods on connections,
postgresql.driver.pq3 enables the standard_conforming_strings
setting
by default. Greater care must be taken when working with versions that do not
support standard strings.
The majority of issues surrounding the interpolation of properly quoted literals can be easily avoided by using parameterized statements.
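For example, rather than interpolating a value into the SQL string, it can be passed as a parameter of a prepared statement; the table, column, and variable names here are hypothetical, and db is a connection (see the shorthands below):
>>> ps = db.prepare("SELECT * FROM users WHERE user_name = $1")
>>> rows = ps(untrusted_name)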
The following identifiers are regularly used as shorthands for significant interface elements:
db
- postgresql.api.Connection, a database connection. Connections
ps
- postgresql.api.Statement, a prepared statement. Prepared Statements
c
- postgresql.api.Cursor, a cursor; the results of a prepared statement. Cursors
C
- postgresql.api.Connector, a connector. Connectors
Establishing a Connection¶
There are many ways to establish a postgresql.api.Connection to a PostgreSQL server using postgresql.driver. This section discusses those connection creation interfaces.
postgresql.open¶
In the root package module, the open()
function is provided for accessing
databases using a locator string and optional connection keywords. The string
taken by postgresql.open is a URL whose components make up the client
parameters:
>>> db = postgresql.open("pq://localhost/postgres")
This will connect to the host localhost and to the database named postgres via the pq protocol. open will inherit client parameters from the environment, so the user name given to the server will come from $PGUSER or, if that is unset, the result of getpass.getuser(), the username of the user running the process. The user's "pgpassfile" will even be referenced if no password is given:
>>> db = postgresql.open("pq://username:password@localhost/postgres")
In this case, the password is given, so ~/.pgpass would never be referenced. The user client parameter is also given, username, so $PGUSER or getpass.getuser() will not be given to the server.
Settings can also be provided by the query portion of the URL:
>>> db = postgresql.open("pq://user@localhost/postgres?search_path=public&timezone=mst")
The above syntax ultimately passes the query as settings (see the description of the settings keyword in Connection Keywords). Driver parameters require a distinction; this distinction is made when the setting's name is wrapped in square brackets, '[' and ']':
>>> db = postgresql.open("pq://user@localhost/postgres?[sslmode]=require&[connect_timeout]=5")
sslmode
and connect_timeout
are driver parameters. These are never sent
to the server, but if they were not in square-brackets, they would be, and the
driver would never identify them as driver parameters.
The general structure of a PQ-locator is:
protocol://user:password@host:port/database?[driver_setting]=value&server_setting=value
Optionally, connection keyword arguments can be used to override anything given in the locator:
>>> db = postgresql.open("pq://user:secret@host", password = "thE_real_sekrat")
Or, if the locator is not desired, individual keywords can be used exclusively:
>>> db = postgresql.open(user = 'user', host = 'localhost', port = 6543)
In fact, all arguments to postgresql.open are optional as all arguments are
keywords; iri
is merely the first keyword argument taken by
postgresql.open. If the environment has all the necessary parameters for a
successful connection, there is no need to pass anything to open:
>>> db = postgresql.open()
For a complete list of keywords that postgresql.open can accept, see
Connection Keywords.
For more information about the environment variables, see PostgreSQL Environment Variables.
For more information about the pgpassfile, see PostgreSQL Password File.
postgresql.driver.connect¶
postgresql.open is a high-level interface to connection creation. It provides password resolution services and client parameter inheritance. For some applications, this is undesirable as such implicit inheritance may lead to failures due to unanticipated parameters being used. For those applications, use of postgresql.open is not recommended. Rather, postgresql.driver.connect should be used when explicit parameterization is desired by an application:
>>> import postgresql.driver as pg_driver
>>> db = pg_driver.connect(
... user = 'usename',
... password = 'secret',
... host = 'localhost',
... port = 5432
... )
This will create a connection to the server listening on port 5432 on the host localhost as the user usename with the password secret.
Note
connect will not inherit parameters from the environment as libpq-based drivers do.
See Connection Keywords for a full list of acceptable keyword parameters and their meaning.
Connectors¶
Connectors are the supporting objects used to instantiate a connection. They exist for the purpose of providing connections with the necessary abstractions for facilitating the client’s communication with the server, and to act as a container for the client parameters. The latter purpose is of primary interest to this section.
Each connection object is associated with its connector by the connector
attribute on the connection. This provides the user with access to the
parameters used to establish the connection in the first place, and the means to
create another connection to the same server. The attributes on the connector
should not be altered. If parameter changes are needed, a new connector should
be created.
The attributes available on a connector are consistent with the names of the connection parameters described in Connection Keywords, so that list can be used as a reference to identify the information available on the connector.
Connectors fit into the category of “connection creation interfaces”, so connector instantiation normally takes the same parameters that the postgresql.driver.connect function takes.
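For instance, a connection's connector can be inspected and then called to produce a sibling connection; the attribute value shown is illustrative:
>>> C = db.connector
>>> C.user
'user'
>>> # create another connection object and establish it
>>> db2 = C()
>>> db2.connect()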
Note
Connector implementations are specific to the transport, so keyword arguments
like host
and port
aren’t supported by the Unix
connector.
The driver, postgresql.driver.default provides a set of connectors for making a connection:
postgresql.driver.default.host(...)
- Provides a getaddrinfo() abstraction for establishing a connection.
postgresql.driver.default.ip4(...)
- Connect to a single IPv4 addressed host.
postgresql.driver.default.ip6(...)
- Connect to a single IPv6 addressed host.
postgresql.driver.default.unix(...)
- Connect to a single unix domain socket. Requires the unix keyword, which must be an absolute path to the unix domain socket to connect to.
host is the usual connector used to establish a connection:
>>> C = postgresql.driver.default.host(
... user = 'auser',
... host = 'foo.com',
... port = 5432)
>>> # create
>>> db = C()
>>> # establish
>>> db.connect()
If a constant internet address is used, ip4
or ip6
can be used:
>>> C = postgresql.driver.default.ip4(user='auser', host='127.0.0.1', port=5432)
>>> db = C()
>>> db.connect()
Additionally, db.connect() is invoked by db.__enter__() for with-statement support:
>>> with C() as db:
... ...
Connectors are constant. They have no knowledge of PostgreSQL service files, environment variables or LDAP services, so changes made to those facilities will not be reflected in a connector's configuration. If the latest information from any of these sources is needed, a new connector needs to be created, as the existing connector will not reflect the changes.
Note
host connectors use getaddrinfo(), so if DNS changes are made, new connections will use the latest information.
Connection Keywords¶
The following is a list of keywords accepted by connection creation interfaces:
user
- The user to connect as.
password
- The user’s password.
database
- The name of the database to connect to. (PostgreSQL defaults it to user)
host
- The hostname or IP address to connect to.
port
- The port on the host to connect to.
unix
- The unix domain socket to connect to. Exclusive with host and port. Expects a string containing the absolute path to the unix domain socket to connect to.
settings
- A dictionary or key-value pair sequence stating the parameters to give to the database. These settings are included in the startup packet and should be used carefully; if an invalid setting is given, the connection will fail.
connect_timeout
- Amount of time to wait for a connection to be made. (in seconds)
server_encoding
- Hint given to the driver to properly encode password data and some information in the startup packet. This should only be used in cases where connections cannot be made due to authentication failures that occur while using known-correct credentials.
sslmode
'disable'
- Don’t allow SSL connections.
'allow'
- Try without SSL first, but if that doesn’t work, try with.
'prefer'
- Try SSL first, then without.
'require'
- Require an SSL connection.
sslcrtfile
- Certificate file path given to ssl.wrap_socket.
sslkeyfile
- Key file path given to ssl.wrap_socket.
sslrootcrtfile
- Root certificate file path given to ssl.wrap_socket.
sslrootcrlfile
- Revocation list file path. [Currently not checked.]
Connections¶
postgresql.open and postgresql.driver.connect provide the means to establish a connection. Connections provide a postgresql.api.Database interface to a PostgreSQL server; specifically, a postgresql.api.Connection.
Connections are one-time objects. Once a connection is closed or lost, it can no longer be used to interact with the database provided by the server. If further use of the server is desired, a new connection must be established.
Note
Failures to connect (exceptions raised on connect()) are also terminal.
In cases where operations are performed on a closed connection, a postgresql.exceptions.ConnectionDoesNotExistError will be raised.
Database Interface Points¶
After a connection is established:
>>> import postgresql
>>> db = postgresql.open(...)
The methods and properties on the connection object are ready for use:
Connection.prepare(sql_statement_string)
- Create a postgresql.api.Statement object for querying the database. This provides an “SQL statement template” that can be executed multiple times. See Prepared Statements for more information.
Connection.proc(procedure_id)
- Create a postgresql.api.StoredProcedure object referring to a stored procedure on the database. The returned object will provide a collections.abc.Callable interface to the stored procedure on the server. See Stored Procedures for more information.
Connection.statement_from_id(statement_id)
- Create a postgresql.api.Statement object from an existing statement identifier. This is used in cases where the statement was prepared on the server. See Prepared Statements for more information.
Connection.cursor_from_id(cursor_id)
- Create a postgresql.api.Cursor object from an existing cursor identifier. This is used in cases where the cursor was declared on the server. See Cursors for more information.
Connection.do(language, source)
- Execute a DO statement on the server using the specified language. DO statements are available on PostgreSQL 9.0 and greater. Executing this method on servers that do not support DO statements will likely cause a SyntaxError.
Connection.execute(sql_statements_string)
- Run a block of SQL on the server. This method returns None unless an error occurs. If errors occur, the processing of the statements will stop and the error will be raised.
Connection.xact(isolation = None, mode = None)
- The postgresql.api.Transaction constructor for creating transactions. This method creates a transaction reference. The transaction will not be started until it’s instructed to do so. See Transactions for more information.
Connection.settings
- A property providing a collections.abc.MutableMapping interface to the database’s SQL settings. See Settings for more information.
Connection.clone()
- Create a new connection object based on the same factors that were used to create db. The new connection returned will already be connected.
Connection.msghook(msg)
- By default, the msghook attribute does not exist. If set to a callable, any message that occurs during an operation of the database or an operation of a database derived object will be given to the callable. See the Database Messages section for more information.
Connection.listen(*channels)
- Start listening for asynchronous notifications in the specified channels. Sends a batch of LISTEN statements to the server.
Connection.unlisten(*channels)
- Stop listening for asynchronous notifications in the specified channels. Sends a batch of UNLISTEN statements to the server.
Connection.listening_channels()
- Return an iterator producing the channel names that are currently being listened to.
Connection.notify(*channels, **channel_and_payload)
- NOTIFY the channels with the given payload. Sends a batch of NOTIFY statements to the server. Equivalent to issuing "NOTIFY <channel>" or "NOTIFY <channel>, <payload>" for each item in channels and channel_and_payload. All NOTIFYs issued will occur in the same transaction, regardless of auto-commit.
The items in channels can either be a string or a tuple. If a string, no payload is given, but if an item is a builtins.tuple, the second item in the pair will be given as the payload, and the first as the channel. channels offers a means to issue NOTIFYs in guaranteed order:
>>> db.notify('channel1', ('different_channel', 'payload'))
In the above, NOTIFY "channel1"; will be issued first, followed by NOTIFY "different_channel", 'payload';.
The items in channel_and_payload are all payloaded NOTIFYs where the keys are the channels and the values are the payloads. Order is undefined:
>>> db.notify(channel_name = 'payload_data')
channels and channel_and_payload can be used together. In such cases, all NOTIFY statements generated from channel_and_payload will follow those in channels.
Connection.iternotifies(timeout = None)
- Return an iterator to the NOTIFYs received on the connection. The iterator will yield notification triples consisting of (channel, payload, pid). While iterating, the connection should not be used in other threads. The optional timeout can be used to enable "idle" events in which None objects will be yielded by the iterator. See Notification Management for details. A brief usage sketch follows this list.
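A brief sketch of these notification methods used together; the channel name and timeout are arbitrary values chosen for illustration:
>>> db.listen('events')
>>> db.notify('events')
>>> for notice in db.iternotifies(timeout = 10):
...     if notice is None:
...         # idle event produced by the timeout; stop waiting
...         break
...     channel, payload, pid = notice
...     print(channel, payload, pid)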
When a connection is established, certain pieces of information are collected from the backend. The following are the attributes set on the connection object after the connection is made:
Connection.version
- The version string of the server; the result of SELECT version().
Connection.version_info
- A sys.version_info form of the server_version setting, e.g. (8, 1, 2, 'final', 0).
Connection.security
- None if no security. 'ssl' if SSL is enabled.
Connection.backend_id
- The process-id of the backend process.
Connection.backend_start
- When the backend was started. A datetime.datetime instance.
Connection.client_address
- The address of the client that the backend is communicating with.
Connection.client_port
- The port of the client that the backend is communicating with.
Connection.fileno()
- Method to get the file descriptor number of the connection's socket. This method will return None if the socket object does not have a fileno. Under normal circumstances, it will return an int.
The backend_start, client_address, and client_port attributes are collected from pg_stat_activity. If this information is unavailable, the attributes will be None.
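For example, after connecting, these attributes can be read directly; the values shown are illustrative:
>>> db.version_info[:2]
(9, 0)
>>> db.security
'ssl'
>>> db.backend_id
1234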
Prepared Statements¶
Prepared statements are the primary entry point for initiating an operation on the database. Prepared statement objects represent a request that will, likely, be sent to the database at some point in the future. A statement is a single SQL command.
The prepare entry point on the connection provides the standard method for creating a postgresql.api.Statement instance bound to the connection (db) from an SQL statement string:
>>> ps = db.prepare("SELECT 1")
>>> ps()
[(1,)]
Statement objects may also be created from a statement identifier using the
statement_from_id
method on the connection. When this method is used, the
statement must have already been prepared or an error will be raised.
>>> db.execute("PREPARE a_statement_id AS SELECT 1;")
>>> ps = db.statement_from_id('a_statement_id')
>>> ps()
[(1,)]
When a statement is executed, it binds any given parameters to a new cursor and the entire result-set is returned.
Statements created using prepare()
will leverage garbage collection in order
to automatically close statements that are no longer referenced. However,
statements created from pre-existing identifiers, statement_from_id
, must
be explicitly closed if the statement is to be discarded.
Statement objects are one-time objects. Once closed, they can no longer be used.
Statement Interface Points¶
Prepared statements can be executed just like functions:
>>> ps = db.prepare("SELECT 'hello, world!'")
>>> ps()
[('hello, world!',)]
The default execution method, __call__, produces the entire result set. It is the simplest form of statement execution. Statement objects can be executed in different ways to accommodate larger results or random access (scrollable cursors).
Prepared statement objects have a few execution methods:
Statement(*parameters)
- As shown before, statement objects can be invoked like a function to get the statement's results.
Statement.rows(*parameters)
- Return an iterator to all the rows produced by the statement. This method will stream rows on demand, so it is ideal for situations where each individual row in a large result-set must be processed.
iter(Statement)
- Convenience interface that executes the rows() method without arguments. This enables the following syntax:
>>> for table_name, in db.prepare("SELECT table_name FROM information_schema.tables"):
...     print(table_name)
Statement.column(*parameters)
- Return an iterator to the first column produced by the statement. This method will stream values on demand, and should only be used with statements that have a single column; otherwise, bandwidth will ultimately be wasted as the other columns will be dropped. This execution method cannot be used with COPY statements.
Statement.first(*parameters)
- For simple statements, cursor objects are unnecessary. Consider the data produced by ps above, 'hello, world!'. To get at this data directly from the __call__(...) method, it looks something like:
>>> ps = db.prepare("SELECT 'hello, world!'")
>>> ps()[0][0]
'hello, world!'
To simplify access to simple data, the first method will simply return the "first" of the result set:
>>> ps.first()
'hello, world!'
What "first" means depends on the result set:
- The first value.
When the result set consists of a single column, first() will return that column in the first row.
- The first row.
When the result set consists of multiple columns, first() will return that first row.
- The first, and only, row count.
When DML (for instance, an INSERT statement) is executed, first() will return the row count returned by the statement as an integer.
Note
DML that returns row data, RETURNING, will not return a row count.
The result set created by the statement determines what is actually returned. Naturally, a statement used with first() should be crafted with these rules in mind.
Statement.chunks(*parameters)
- This access point is designed for situations where rows are being streamed out quickly. It is a method that returns a collections.abc.Iterator that produces sequences of rows. This is the most efficient way to get rows from the database. The rows in the sequences are builtins.tuple objects.
Statement.declare(*parameters)
- Create a scrollable cursor with hold. This returns a postgresql.api.Cursor ready for accessing random rows in the result-set. Applications that use the database to support paging can use this method to manage the view.
Statement.close()
- Close the statement inhibiting further use.
Statement.load_rows(collections.abc.Iterable(parameters))
- Given an iterable producing parameters, execute the statement for each iteration. Always returns None.
Statement.load_chunks(collections.abc.Iterable(collections.abc.Iterable(parameters)))
- Given an iterable of iterables producing parameters, execute the statement for each parameter set produced. However, send all execution commands with the corresponding parameters of each chunk before reading any results. Always returns None. This access point is designed to be used in conjunction with Statement.chunks() for transferring rows from one connection to another with great efficiency:
>>> dst.prepare(...).load_chunks(src.prepare(...).chunks())
Statement.clone()
- Create a new statement object based on the same factors that were used to create ps.
Statement.msghook(msg)
- By default, the msghook attribute does not exist. If set to a callable, any message that occurs during an operation of the statement or an operation of a statement derived object will be given to the callable. See the Database Messages section for more information.
In order to provide the appropriate type transformations, the driver must acquire metadata about the statement’s parameters and results. This data is published via the following properties on the statement object:
Statement.sql_parameter_types
- A sequence of SQL type names specifying the types of the parameters used in the statement.
Statement.sql_column_types
- A sequence of SQL type names specifying the types of the columns produced by the statement. None if the statement does not return row-data.
Statement.pg_parameter_types
- A sequence of PostgreSQL type Oid’s specifying the types of the parameters used in the statement.
Statement.pg_column_types
- A sequence of PostgreSQL type Oid’s specifying the types of the columns produced by the statement. None if the statement does not return row-data.
Statement.parameter_types
- A sequence of Python types that the statement expects.
Statement.column_types
- A sequence of Python types that the statement will produce.
Statement.column_names
- A sequence of str objects specifying the names of the columns produced by the statement. None if the statement does not return row-data.
The indexes of the parameter sequences correspond to the parameter's identifier, N+1: sql_parameter_types[0] -> '$1'.
>>> ps = db.prepare("SELECT $1::integer AS intname, $2::varchar AS chardata")
>>> ps.sql_parameter_types
('INTEGER','VARCHAR')
>>> ps.sql_column_types
('INTEGER','VARCHAR')
>>> ps.column_names
('intname','chardata')
>>> ps.column_types
(<class 'int'>, <class 'str'>)
Parameterized Statements¶
Statements can take parameters. Using statement parameters is the recommended
way to interrogate the database when variable information is needed to formulate
a complete request. In order to do this, the statement must be defined using PostgreSQL's positional parameter notation: $1, $2, $3, etc.:
>>> ps = db.prepare("SELECT $1")
>>> ps('hello, world!')[0][0]
'hello, world!'
PostgreSQL determines the type of the parameter based on the context of the parameter’s identifier:
>>> ps = db.prepare(
... "SELECT * FROM information_schema.tables WHERE table_name = $1 LIMIT $2"
... )
>>> ps("tables", 1)
[('postgres', 'information_schema', 'tables', 'VIEW', None, None, None, None, None, 'NO', 'NO', None)]
Parameter $1 in the above statement will take on the type of the table_name column, and $2 will take on the type required by the LIMIT clause (text and int8).
However, parameters can be forced to a specific type using explicit casts:
>>> ps = db.prepare("SELECT $1::integer")
>>> ps.first(-400)
-400
Parameters are typed. PostgreSQL servers provide the driver with the type information about a positional parameter, and the serialization routine will raise an exception if the given object is inappropriate. The Python types expected by the driver for a given SQL-or-PostgreSQL type are listed in Type Support.
This usage of types is not always convenient. Notably, the datetime module does not provide a friendly way for a user to express intervals, dates, or times. There is a likely inclination to forego these parameter type requirements.
In such cases, explicit casts can be made to work-around the type requirements:
>>> ps = db.prepare("SELECT $1::text::date")
>>> ps.first('yesterday')
datetime.date(2009, 3, 11)
The parameter, $1, is given to the database as a string, which is then promptly cast into a date. Of course, without the explicit cast as text, the outcome would be different:
>>> ps = db.prepare("SELECT $1::date")
>>> ps.first('yesterday')
Traceback:
...
postgresql.exceptions.ParameterError
The function that processes the parameter expects a datetime.date object, and the given str object does not provide the necessary interfaces for the conversion, so the driver raises a postgresql.exceptions.ParameterError from the original conversion exception.
Inserting and DML¶
Loading data into the database is facilitated by prepared statements. In these examples, a table definition is necessary for a complete illustration:
>>> db.execute(
... """
... CREATE TABLE employee (
... employee_name text,
... employee_salary numeric,
... employee_dob date,
... employee_hire_date date
... );
... """
... )
Create an INSERT statement using prepare:
>>> mkemp = db.prepare("INSERT INTO employee VALUES ($1, $2, $3, $4)")
And add “Mr. Johnson” to the table:
>>> import datetime
>>> r = mkemp(
... "John Johnson",
... "92000",
... datetime.date(1950, 12, 10),
... datetime.date(1998, 4, 23)
... )
>>> print(r[0])
INSERT
>>> print(r[1])
1
The execution of DML will return a tuple. This tuple contains the completed command name and the associated row count.
Using the call interface is fine for making a single insert, but when multiple
records need to be inserted, it’s not the most efficient means to load data. For
multiple records, the ps.load_rows([...]) method provides an efficient way to load large quantities of structured data:
>>> from datetime import date
>>> mkemp.load_rows([
... ("Jack Johnson", "85000", date(1962, 11, 23), date(1990, 3, 5)),
... ("Debra McGuffer", "52000", date(1973, 3, 4), date(2002, 1, 14)),
... ("Barbara Smith", "86000", date(1965, 2, 24), date(2005, 7, 19)),
... ])
While small, the above illustrates the ps.load_rows() method taking an iterable of tuples that provides parameters for each execution of the statement.
load_rows is also used to support COPY ... FROM STDIN statements:
>>> copy_emps_in = db.prepare("COPY employee FROM STDIN")
>>> copy_emps_in.load_rows([
... b'Emp Name1\t72000\t1970-2-01\t1980-10-22\n',
... b'Emp Name2\t62000\t1968-9-11\t1985-11-1\n',
... b'Emp Name3\t62000\t1968-9-11\t1985-11-1\n',
... ])
Copy data goes in as bytes and comes out as bytes regardless of the type of COPY taking place. It is the user's obligation to make sure the row-data is in the appropriate encoding.
COPY Statements¶
postgresql.driver transparently supports PostgreSQL’s COPY command. To the
user, COPY will act exactly like other statements that produce tuples; COPY
tuples, however, are bytes objects. The only distinction in usability is that
the COPY should be completed before other actions take place on the
connection–this is important when a COPY is invoked via rows()
or
chunks()
.
In situations where other actions are invoked during a COPY TO STDOUT
, the
entire result set of the COPY will be read. However, no error will be raised so
long as there is enough memory available, so it is very desirable to avoid
doing other actions on the connection while a COPY is active.
In situations where other actions are invoked during a COPY FROM STDIN
, a
COPY failure error will occur. The driver manages the connection state in such
a way that will purposefully cause the error as the COPY was inappropriately
interrupted. This is not usually a problem, as the load_rows(...) and load_chunks(...) methods must complete the COPY command before returning.
Copy data is always transferred using bytes objects, even in cases where the COPY is not in BINARY mode. Any needed encoding transformations must be made by the caller. This is done to avoid any unnecessary overhead by default:
>>> ps = db.prepare("COPY (SELECT i FROM generate_series(0, 99) AS g(i)) TO STDOUT")
>>> r = ps()
>>> len(r)
100
>>> r[0]
b'0\n'
>>> r[-1]
b'99\n'
Of course, invoking a statement that way will read the entire result-set into
memory, which is not usually desirable for COPY. Using the chunks(...)
iterator is the fastest way to move data:
>>> import sys
>>> ci = ps.chunks()
>>> for rowset in ci:
... sys.stdout.buffer.writelines(rowset)
...
<lots of data>
COPY FROM STDIN commands are supported via postgresql.api.Statement.load_rows. Each invocation of load_rows is a single invocation of COPY. load_rows takes an iterable of COPY lines to send to the server:
>>> db.execute("""
... CREATE TABLE sample_copy (
... sc_number int,
... sc_text text
... );
... """)
>>> copyin = db.prepare('COPY sample_copy FROM STDIN')
>>> copyin.load_rows([
... b'123\tone twenty three\n',
... b'350\ttree fitty\n',
... ])
For direct connection-to-connection COPY, use of load_chunks(...)
is
recommended as it will provide the most efficient transfer method:
>>> copyout = src.prepare('COPY atable TO STDOUT')
>>> copyin = dst.prepare('COPY atable FROM STDIN')
>>> copyin.load_chunks(copyout.chunks())
Specifically, each chunk of row data produced by chunks()
will be written in
full by load_chunks()
before getting another chunk to write.
Cursors¶
When a prepared statement is declared, ps.declare(...)
, a
postgresql.api.Cursor is created and returned for random access to the rows in
the result set. Direct use of cursors is primarily useful for applications that
need to implement paging. For situations that need to iterate over the result
set, the ps.rows(...)
or ps.chunks(...)
execution methods should be
used.
Cursors can also be created directly from cursor_id's using the cursor_from_id method on connection objects:
>>> db.execute('DECLARE the_cursor_id CURSOR WITH HOLD FOR SELECT 1;')
>>> c = db.cursor_from_id('the_cursor_id')
>>> c.read()
[(1,)]
>>> c.close()
Hint
If the cursor that needs to be opened is going to be treated as an iterator, then a FETCH statement should be prepared instead of using cursor_from_id.
Like statements created from an identifier, cursors created from an identifier must be explicitly closed in order to destroy the object on the server. Likewise, cursors created from statement invocations will be automatically released when they are no longer referenced.
Note
PG-API cursors are a direct interface to single result-set SQL cursors. This is in contrast with DB-API cursors, which have interfaces for dealing with multiple result-sets. There is no execute method on PG-API cursors.
Cursor Interface Points¶
For cursors that return row data, these interfaces are provided for accessing those results:
Cursor.read(quantity = None, direction = None)
- This method name is borrowed from file objects and is semantically similar. However, this being a cursor, rows are returned instead of bytes or characters. When the number of rows returned is less than the quantity requested, it means that the cursor has been exhausted in the configured direction. The direction argument can be either 'FORWARD' or True to FETCH FORWARD, or 'BACKWARD' or False to FETCH BACKWARD. Like seek(), the direction property on the cursor object affects this method.
Cursor.seek(position[, whence = 0])
- When the cursor is scrollable, this seek interface can be used to move the position of the cursor. See Scrollable Cursors for more information.
next(Cursor)
- This fetches the next row in the cursor object. Cursors support the iterator protocol. While equivalent to cursor.read(1)[0], StopIteration is raised if the returned sequence is empty. (__next__())
Cursor.close()
- For cursors opened using cursor_from_id(), this method must be called in order to CLOSE the cursor. For cursors created by invoking a prepared statement, this is not necessary as the garbage collection interface will take the appropriate steps.
Cursor.clone()
- Create a new cursor object based on the same factors that were used to create c.
Cursor.msghook(msg)
- By default, the msghook attribute does not exist. If set to a callable, any message that occurs during an operation of the cursor will be given to the callable. See the Database Messages section for more information.
Cursors have some additional configuration properties that may be modified during the use of the cursor:
Cursor.direction
- A value of True, the default, will cause read to fetch forwards, whereas a value of False will cause it to fetch backwards. 'BACKWARD' and 'FORWARD' can be used instead of False and True.
Cursors normally share metadata with the statements that create them, so it is usually unnecessary to reference the cursor's column descriptions directly. However, when a cursor is opened from an identifier, the cursor interface must collect the metadata itself. These attributes provide the metadata in the absence of a statement object:
Cursor.sql_column_types
- A sequence of SQL type names specifying the types of the columns produced by the cursor. None if the cursor does not return row-data.
Cursor.pg_column_types
- A sequence of PostgreSQL type Oid’s specifying the types of the columns produced by the cursor. None if the cursor does not return row-data.
Cursor.column_types
- A sequence of Python types that the cursor will produce.
Cursor.column_names
- A sequence of str objects specifying the names of the columns produced by the cursor. None if the cursor does not return row-data.
Cursor.statement
- The statement that was executed that created the cursor. None if unknown, as with db.cursor_from_id().
Scrollable Cursors¶
Scrollable cursors are supported for applications that need to implement paging.
When statements are invoked via the declare(...)
method, the returned cursor
is scrollable.
Note
Scrollable cursors never pre-fetch in order to provide guaranteed positioning.
The cursor interface supports scrolling using the seek method. Like read, it is semantically similar to a file object's seek(). seek takes two arguments, position and whence:
position
- The position to scroll to. The meaning of this is determined by whence.
whence
- How to use the position: absolute, relative, or absolute from end:
absolute: 'ABSOLUTE' or 0 (default)
- seek to the absolute position in the cursor relative to the beginning of the cursor.
relative: 'RELATIVE' or 1
- seek to the relative position. Negative positions will cause a MOVE backwards, while positive positions will MOVE forwards.
from end: 'FROM_END' or 2
- seek to the end of the cursor and then MOVE backwards by the given position.
The whence keyword argument allows either numeric or textual specifications.
Scrolling through employees:
>>> emps_by_age = db.prepare("""
... SELECT
... employee_name, employee_salary, employee_dob, employee_hire_date,
... EXTRACT(years FROM AGE(employee_dob)) AS age
... FROM employee
... ORDER BY age ASC
... """)
>>> c = emps_by_age.declare()
>>> # seek to the end; 2 works as well.
>>> c.seek(0, 'FROM_END')
>>> # scroll back one; 1 works as well.
>>> c.seek(-1, 'RELATIVE')
>>> # and back to the beginning again
>>> c.seek(0)
Additionally, scrollable cursors support backward fetches by specifying the direction keyword argument:
>>> c.seek(0, 2)
>>> c.read(1, 'BACKWARD')
Cursor Direction¶
The direction
property on the cursor states the default direction for read
and seek operations. Normally, the direction is True, 'FORWARD'
. When the
property is set to 'BACKWARD'
or False, the read method will fetch
backward by default, and seek operations will be inverted to simulate a
reversely ordered cursor. The following example illustrates the effect:
>>> reverse_c = db.prepare('SELECT i FROM generate_series(99, 0, -1) AS g(i)').declare()
>>> c = db.prepare('SELECT i FROM generate_series(0, 99) AS g(i)').declare()
>>> reverse_c.direction = 'BACKWARD'
>>> reverse_c.seek(0)
>>> c.read() == reverse_c.read()
Furthermore, when the cursor is configured to read backwards, specifying
'BACKWARD'
for read’s direction
argument will ultimately cause a forward
fetch. This potentially confusing facet of direction configuration is
implemented in order to create an appropriate symmetry in functionality.
The cursors in the above example contain the same rows, but are ultimately in
reverse order. The backward direction property is designed so that the effect
of any read or seek operation on those cursors is the same:
>>> reverse_c.seek(50)
>>> c.seek(50)
>>> c.read(10) == reverse_c.read(10)
>>> c.read(10, 'BACKWARD') == reverse_c.read(10, 'BACKWARD')
And for relative seeks:
>>> c.seek(-10, 1)
>>> reverse_c.seek(-10, 1)
>>> c.read(10, 'BACKWARD') == reverse_c.read(10, 'BACKWARD')
Rows¶
Rows received from PostgreSQL are instantiated into postgresql.types.Row objects. Rows are both a sequence and a mapping. Items accessed with an int are seen as indexes and other objects are seen as keys:
>>> row = db.prepare("SELECT 't'::text AS col0, 2::int4 AS col1").first()
>>> row
('t', 2)
>>> row[0]
't'
>>> row["col0"]
't'
However, this extra functionality is not free. The cost of instantiating postgresql.types.Row objects is quite measurable, so the chunks() execution method will produce builtins.tuple objects for cases where performance is critical.
Note
Attributes aren’t used to provide access to values due to potential conflicts with existing method and property names.
Row Interface Points¶
Rows implement the collections.abc.Mapping and collections.abc.Sequence interfaces.
Row.keys()
- An iterable producing the column names. Order is not guaranteed. See the column_names property to get an ordered sequence.
Row.values()
- Iterable to the values in the row.
Row.get(key_or_index[, default=None])
- Get the item in the row. If the key doesn't exist or the index is out of range, return the default.
Row.items()
- Iterable of key-value pairs. Ordered by index.
iter(Row)
- Iterable to the values in index order.
value in Row
- Whether or not the value exists in the row. (__contains__)
Row[key_or_index]
- If key_or_index is an integer, return the value at that index. If the index is out of range, raise an IndexError. Otherwise, return the value associated with the column name. If the given key, key_or_index, does not exist, raise a KeyError.
Row.index_from_key(key)
- Return the index associated with the given key.
Row.key_from_index(index)
- Return the key associated with the given index.
Row.transform(*args, **kw)
- Create a new row object of the same length, with the same keys, but with new values produced by applying the given callables to the corresponding items. Callables given as args will be associated with values by their index, and callables given as keywords will be associated with values by their key, the column name.
While the mapping interfaces will provide most of the needed information, some additional properties are provided for consistency with statement and cursor objects.
Row.column_names
- Property providing an ordered sequence of column names. The index corresponds to the row value-index that the name refers to.
>>> row[row.column_names[i]] == row[i]
Row Transformations¶
After a row is returned, sometimes the data in the row is not in the desired format. Further processing is needed if the row object is going to be given to another piece of code which requires an object of differing consistency.
The transform
method on row objects provides a means to create a new row
object consisting of the old row’s items, but with certain columns transformed
using the given callables:
>>> row = db.prepare("""
... SELECT
... 'XX9301423'::text AS product_code,
... 2::int4 AS quantity,
... '4.92'::numeric AS total
... """).first()
>>> row
('XX9301423', 2, Decimal("4.92"))
>>> row.transform(quantity = str)
('XX9301423', '2', Decimal("4.92"))
transform
supports both positional and keyword arguments in order to
assign the callable for a column’s transformation:
>>> from operator import methodcaller
>>> row.transform(methodcaller('strip', 'XX'))
('9301423', 2, Decimal("4.92"))
Of course, more than one column can be transformed:
>>> stripxx = methodcaller('strip', 'XX')
>>> row.transform(stripxx, str, str)
('9301423', '2', '4.92')
None can also be used to indicate no transformation:
>>> row.transform(None, str, str)
('XX9301423', '2', '4.92')
More advanced usage can make use of lambdas for compound transformations in a single pass of the row:
>>> strip_and_int = lambda x: int(stripxx(x))
>>> row.transform(strip_and_int)
(9301423, 2, Decimal("4.92"))
Transformations will be, more often than not, applied against rows as opposed to a row. Using operator.methodcaller with map provides the necessary functionality to create simple iterables producing transformed row sequences:
>>> import decimal
>>> apply_tax = lambda x: (x * decimal.Decimal("0.1")) + x
>>> transform_row = methodcaller('transform', strip_and_int, None, apply_tax)
>>> r = map(transform_row, [row])
>>> list(r)
[(9301423, 2, Decimal('5.412'))]
And finally, functools.partial can be used to create a simple callable:
>>> from functools import partial
>>> transform_rows = partial(map, transform_row)
>>> list(transform_rows([row]))
[(9301423, 2, Decimal('5.412'))]
Queries¶
Queries in py-postgresql are single use prepared statements. They exist primarily for syntactic convenience, but they also allow the driver to recognize the short lifetime of the statement.
Single use statements are supported using the query property on connection objects, postgresql.api.Connection.query. The statement object is not available when using queries, as the results, or a handle to the results, are directly returned.
Queries have access to all execution methods:
Connection.query(sql, *parameters)
Connection.query.rows(sql, *parameters)
Connection.query.column(sql, *parameters)
Connection.query.first(sql, *parameters)
Connection.query.chunks(sql, *parameters)
Connection.query.declare(sql, *parameters)
Connection.query.load_rows(sql, collections.abc.Iterable(parameters))
Connection.query.load_chunks(collections.abc.Iterable(collections.abc.Iterable(parameters)))
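A short sketch of the query property in use; the statements and output are illustrative:
>>> db.query("SELECT 1")
[(1,)]
>>> db.query.first("SELECT 'hello'::text")
'hello'
>>> for i, in db.query.rows("SELECT i FROM generate_series(1, 3) AS g(i)"):
...     print(i)
1
2
3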
In cases where a sequence of one-shot queries needs to be performed, it may be important to
avoid unnecessary repeat attribute resolution from the connection object as the query
property is an interface object created on access. Caching the target execution methods is
recommended:
qrows = db.query.rows
l = []
for x in my_queries:
l.append(qrows(x))
The characteristics of each execution method are discussed in the prior Prepared Statements section.
Stored Procedures¶
The proc
method on postgresql.api.Database objects provides a means to
create a reference to a stored procedure on the remote database.
postgresql.api.StoredProcedure objects are used to represent the referenced
SQL routine.
This provides a direct interface to functions stored on the database. It leverages knowledge of the parameters and results of the function in order to provide the user with a natural interface to the procedure:
>>> func = db.proc('version()')
>>> func()
'PostgreSQL 8.3.6 on ...'
Stored Procedure Interface Points¶
It’s more-or-less a function, so there’s only one interface point:
func(*args, **kw) (__call__)
- Stored procedure objects are callable; executing a procedure will return an object of suitable representation for a given procedure's type signature.
If it returns a single object, it will return the single object produced by the procedure.
If it's a set returning function, it will return an iterable to the values produced by the procedure.
In cases of a set returning function with multiple OUT-parameters, a cursor will be returned.
Stored Procedure Type Support¶
Stored procedures support most types of functions. "Function types" here means set returning functions, functions with multiple OUT-parameters, and simple single-object returns.
Set-returning functions (SRFs) return a sequence:
>>> generate_series = db.proc('generate_series(int,int)')
>>> gs = generate_series(1, 20)
>>> gs
<generator object <genexpr>>
>>> next(gs)
1
>>> list(gs)
[2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20]
For functions like generate_series(), the driver is able to identify that the return is a sequence of solitary integer objects, so the result of the function is just that, a sequence of integers.
Functions returning composite types are recognized, and return row objects:
>>> db.execute("""
... CREATE FUNCTION composite(OUT i int, OUT t text)
... LANGUAGE SQL AS
... $body$
... SELECT 900::int AS i, 'sample text'::text AS t;
... $body$;
... """)
>>> composite = db.proc('composite()')
>>> r = composite()
>>> r
(900, 'sample text')
>>> r['i']
900
>>> r['t']
'sample text'
Functions returning a set of composites are recognized, and the result is a postgresql.api.Cursor object whose column names are consistent with the names of the OUT parameters:
>>> db.execute("""
... CREATE FUNCTION srfcomposite(out i int, out t text)
... RETURNS SETOF RECORD
... LANGUAGE SQL AS
... $body$
... SELECT 900::int AS i, 'sample text'::text AS t
... UNION ALL
... SELECT 450::int AS i, 'more sample text'::text AS t
... $body$;
... """)
>>> srfcomposite = db.proc('srfcomposite()')
>>> r = srfcomposite()
>>> next(r)
(900, 'sample text')
>>> v = next(r)
>>> v['i'], v['t']
(450, 'more sample text')
Transactions¶
Transactions are managed by creating an object corresponding to a
transaction started on the server. A transaction is a transaction block,
a savepoint, or a prepared transaction. The xact(...)
method on the
connection object provides the standard method for creating a
postgresql.api.Transaction object to manage a transaction on the connection.
The creation of a transaction object does not start the transaction. Rather, the
transaction must be explicitly started using the start()
method on the
transaction object. Usually, transactions should be managed with the context
manager interfaces:
>>> with db.xact():
... ...
The transaction in the above example is opened, started, by the __enter__
method invoked by the with-statement’s usage. It will be subsequently
committed or rolled-back depending on the exception state and the error state
of the connection when __exit__
is called.
Using the with-statement syntax for managing transactions is strongly recommended. The transaction's context manager allows Python exceptions to be properly treated as fatal to the transaction: when an uncaught exception of any kind occurs within the block, it is unlikely that the state of the transaction can be trusted. Additionally, the __exit__ method provides a safe-guard against invalid commits. These can occur if a database error is inappropriately caught within a block without being raised.
The context manager interfaces are higher level interfaces to the explicit instruction methods provided by postgresql.api.Transaction objects.
Transaction Configuration¶
Keyword arguments given to xact()
provide the means for configuring the
properties of the transaction. Two points of configuration are available:
isolation
- The isolation level of the transaction. This must be a string. It will be interpolated directly into the START TRANSACTION statement. Normally, 'SERIALIZABLE' or 'READ COMMITTED':
>>> with db.xact('SERIALIZABLE'):
...     ...
mode
- A string, 'READ ONLY' or 'READ WRITE'. States the mutability of stored information in the database. Like isolation, this is interpolated directly into the START TRANSACTION string.
The specification of any of these transaction properties imply that the transaction is a block. Savepoints do not take configuration, so if a transaction identified as a block is started while another block is running, an exception will be raised.
Transaction Interface Points¶
The methods available on transaction objects manage the state of the transaction and relay any necessary instructions to the remote server in order to reflect that change of state.
>>> x = db.xact(...)
x.start()
- Start the transaction.
x.commit()
- Commit the transaction.
x.rollback()
- Abort the transaction.
These methods are primarily provided for applications that manage transactions
in a way that cannot be formed around single, sequential blocks of code.
Generally, using these methods requires additional work to be performed by the code that is managing the transaction.
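A sketch of direct transaction management using these methods; the statement executed inside the block is hypothetical:
>>> x = db.xact()
>>> x.start()
>>> try:
...     db.execute("CREATE TEMP TABLE t (i int)")
... except Exception:
...     x.rollback()
...     raise
... else:
...     x.commit()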
If usage of these direct, instructional methods is necessary, it is important to
note that if the database is in an error state when a transaction block’s
commit() is executed, an implicit rollback will occur. The transaction object
will simply follow instructions and issue the COMMIT
statement, and it will
succeed without exception.
Error Control¶
Handling database errors inside transaction CMs is generally discouraged as any database operation that occurs within a failed transaction is an error itself. It is important to trap any recoverable database errors outside of the scope of the transaction’s context manager:
>>> try:
... with db.xact():
... ...
... except postgresql.exceptions.UniqueError:
... pass
In cases where the database is in an error state, but the context exits without an exception, a postgresql.exceptions.InFailedTransactionError is raised by the driver:
>>> with db.xact():
... try:
... ...
... except postgresql.exceptions.UniqueError:
... pass
...
Traceback (most recent call last):
...
postgresql.exceptions.InFailedTransactionError: invalid block exit detected
CODE: 25P02
SEVERITY: ERROR
Normally, if a COMMIT
is issued on a failed transaction, the command implies a
ROLLBACK
without error. This is a very undesirable result for the CM’s exit
as it may allow for code to be ran that presumes the transaction was committed.
The driver intervenes here and raises the
postgresql.exceptions.InFailedTransactionError to safe-guard against such
cases. This effect is consistent with savepoint releases that occur during an
error state. The distinction between the two cases is made using the source
property on the raised exception.
Settings¶
SQL's SHOW and SET provide a means to configure runtime parameters on the database ("GUC"s). In order to save the user some grief, a collections.abc.MutableMapping interface is provided to simplify configuration.
The settings
attribute on the connection provides the interface extension.
The standard dictionary interface is supported:
>>> db.settings['search_path'] = "$user,public"
And update(...) performs better for multiple sets:
>>> db.settings.update({
... 'search_path' : "$user,public",
... 'default_statistics_target' : "1000"
... })
Note
The transaction_isolation
setting cannot be set using the settings
mapping. Internally, settings
uses set_config
, which cannot adjust
that particular setting.
Settings Interface Points¶
Manipulation and interrogation of the connection’s settings is achieved by using the standard collections.abc.MutableMapping interfaces.
Connection.settings[k]
- Get the value of a single setting.
Connection.settings[k] = v
- Set the value of a single setting.
Connection.settings.update([(k1,v2), (k2,v2), ..., (kn,vn)])
- Set multiple settings using a sequence of key-value pairs.
Connection.settings.update({k1 : v1, k2 : v2, ..., kn : vn})
- Set multiple settings using a dictionary or mapping object.
Connection.settings.getset([k1, k2, ..., kn])
- Get a set of settings. This is the most efficient way to get multiple settings as it uses a single request.
Connection.settings.keys()
- Get all available setting names.
Connection.settings.values()
- Get all setting values.
Connection.settings.items()
- Get a sequence of key-value pairs corresponding to all settings on the database.
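For example, several settings can be fetched in a single request with getset; the return value shown as a mapping is illustrative:
>>> db.settings.getset(['search_path', 'timezone'])
{'search_path': '"$user",public', 'timezone': 'MST'}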
Settings Management¶
postgresql.api.Settings objects can create context managers when called. This gives the user the ability to specify sections of code that are to be run with certain settings. The settings' context manager takes full advantage of keyword arguments in order to configure the context manager:
>>> with db.settings(search_path = 'local,public', timezone = 'mst'):
... ...
postgresql.api.Settings objects are callable; the return is a context manager configured with the given keyword arguments representing the settings to use for the block of code that is about to be executed.
When the block exits, the settings will be restored to the values that they had before the block entered.
Type Support¶
The driver supports a large number of PostgreSQL types at the binary level. Most types are converted to standard Python types. The remaining types are usually PostgreSQL specific types that are converted into objects whose class is defined in postgresql.types.
When a conversion function is not available for a particular type, the driver will use the string format of the type and instantiate a str object for the data. It will also expect str data when a parameter of a type without a conversion function is bound.
Note
Generally, these standard types are provided for convenience. If conversions into
these datatypes are not desired, it is recommended that explicit casts into text are made in the statement string.
PostgreSQL Types | Python Types | SQL Types
---|---|---
postgresql.types.INT2OID | int | smallint
postgresql.types.INT4OID | int | integer
postgresql.types.INT8OID | int | bigint
postgresql.types.FLOAT4OID | float | float
postgresql.types.FLOAT8OID | float | double
postgresql.types.VARCHAROID | str | varchar
postgresql.types.BPCHAROID | str | char
postgresql.types.XMLOID | xml.etree (cElementTree) | xml
postgresql.types.DATEOID | datetime.date | date
postgresql.types.TIMESTAMPOID | datetime.datetime | timestamp
postgresql.types.TIMESTAMPTZOID | datetime.datetime (tzinfo) | timestamptz
postgresql.types.TIMEOID | datetime.time | time
postgresql.types.TIMETZOID | datetime.time | timetz
postgresql.types.INTERVALOID | datetime.timedelta | interval
postgresql.types.NUMERICOID | decimal.Decimal | numeric
postgresql.types.BYTEAOID | bytes | bytea
postgresql.types.TEXTOID | str | text
hstore (contrib) | dict | hstore
The mapping in the above table normally goes both ways. So when a parameter is passed to a statement, the type should be consistent with the corresponding Python type. However, many times, for convenience, the object will be passed through the type’s constructor, so it is not always necessary.
Arrays¶
Arrays of PostgreSQL types are supported with near transparency. For simple arrays, arbitrary iterables can just be given as a statement’s parameter and the array’s constructor will consume the objects produced by the iterator into a postgresql.types.Array instance. However, in situations where the array has multiple dimensions, list objects are used to delimit the boundaries of the array.
>>> ps = db.prepare("select $1::int[]")
>>> ps.first([(1,2), (2,3)])
Traceback:
...
postgresql.exceptions.ParameterError
In the above case, it is apparent that this array is supposed to have two dimensions. However, this is not the case for other types:
>>> ps = db.prepare("select $1::point[]")
>>> ps.first([(1,2), (2,3)])
postgresql.types.Array([postgresql.types.point((1.0, 2.0)), postgresql.types.point((2.0, 3.0))])
Lists are used to provide the necessary boundary information:
>>> ps = db.prepare("select $1::int[]")
>>> ps.first([[1,2],[2,3]])
postgresql.types.Array([[1,2],[2,3]])
The above is the appropriate way to define the array from the original example.
Hint
The root iterable object given as an array parameter does not need to be a list type, as it is assumed to consist of the array’s elements.
Composites¶
Composites are supported using postgresql.types.Row objects to represent the data. When a composite is referenced for the first time, the driver queries the database for information about the columns that make up the type. This information is then used to create the necessary I/O routines for packing and unpacking the parameters and columns of that type:
>>> db.execute("CREATE TYPE ctest AS (i int, t text, n numeric);")
>>> ps = db.prepare("SELECT $1::ctest")
>>> i = (100, 'text', "100.02013")
>>> r = ps.first(i)
>>> r["t"]
'text'
>>> r["n"]
Decimal("100.02013")
Or if use of a dictionary is desired:
>>> r = ps.first({'t' : 'just-the-text'})
>>> r
(None, 'just-the-text', None)
When a dictionary is given to construct the row, absent values are filled with None.
Database Messages¶
By default, py-postgresql gives detailed reports of messages emitted by the database. Often, the verbosity is excessive due to single target processes or existing application infrastructure for tracing the sources of various events. Normally, this verbosity is not a significant problem, as the driver sets the client_min_messages setting to 'WARNING' by default.
However, if NOTICE or INFO messages are needed, finer grained control over message propagation may be desired. py-postgresql’s object relationship model provides a common protocol for controlling message propagation and, ultimately, display.
The msghook attribute on elements–for instance, Statements, Connections, and Connectors–is absent by default. However, when present on an object that contributed to the cause of a message event, it will be invoked with the Message, postgresql.message.Message, object as its sole parameter. The attribute of the object that is closest to the event is checked first; if present, it will be called. If the msghook() call returns a True value (specifically, bool(x) is True), the message will not be propagated any further. However, if a False value–notably, None–is returned, the next element is checked until the list is exhausted and the message is given to postgresql.sys.msghook. The normal list of elements is as follows:
Where Output can be a postgresql.api.Cursor object produced by declare(...), or an implicit output management object used internally by Statement.__call__() and other statement execution methods. Setting the msghook attribute on postgresql.api.Statement gives very fine control over raised messages. Consider filtering the notice messages on CREATE TABLE statements that implicitly create indexes:
>>> db = postgresql.open(...)
>>> db.settings['client_min_messages'] = 'NOTICE'
>>> ct_this = db.prepare('CREATE TEMP TABLE "this" (i int PRIMARY KEY)')
>>> ct_that = db.prepare('CREATE TEMP TABLE "that" (i int PRIMARY KEY)')
>>> def filter_notices(msg):
... if msg.details['severity'] == 'NOTICE':
... return True
...
>>> ct_that()
NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index "that_pkey" for table "that"
...
('CREATE TABLE', None)
>>> ct_this.msghook = filter_notices
>>> ct_this()
('CREATE TABLE', None)
>>>
The above illustrates the effect of an installed msghook that simply inhibits further propagation of messages with a severity of ‘NOTICE’–but only notices coming from objects derived from the ct_this postgresql.api.Statement object.
Subsequently, if the filter is installed on the connection’s msghook instead:
>>> db = postgresql.open(...)
>>> db.settings['client_min_messages'] = 'NOTICE'
>>> ct_this = db.prepare('CREATE TEMP TABLE "this" (i int PRIMARY KEY)')
>>> ct_that = db.prepare('CREATE TEMP TABLE "that" (i int PRIMARY KEY)')
>>> def filter_notices(msg):
... if msg.details['severity'] == 'NOTICE':
... return True
...
>>> db.msghook = filter_notices
>>> ct_that()
('CREATE TABLE', None)
>>> ct_this()
('CREATE TABLE', None)
>>>
Any message with 'NOTICE' severity coming from the connection, db, will be suppressed by the filter_notices function. However, if a msghook is installed on either of those statements, it would be possible for display to occur depending on the implementation of the hook installed on the statement objects.
Message Metadata¶
PostgreSQL messages, postgresql.message.Message, are primarily described in three parts: the SQL-state code, the main message string, and a mapping containing the details. The following attributes are available on message objects:
Message.message
- The primary message string.
Message.code
- The SQL-state code associated with a given message.
Message.source
- The origins of the message. Normally, 'SERVER' or 'CLIENT'.
Message.location
- A terse, textual representation of 'file', 'line', and 'function' provided by the associated details.
Message.details
- A mapping providing extended information about a message. This mapping object can contain the following keys:
'severity'
- Any of 'DEBUG', 'INFO', 'NOTICE', 'WARNING', 'ERROR', 'FATAL', or 'PANIC'; the latter three are usually associated with a postgresql.exceptions.Error instance.
'context'
- The CONTEXT portion of the message.
'detail'
- The DETAIL portion of the message.
'hint'
- The HINT portion of the message.
'position'
- A number identifying the position in the statement string that caused a parse error.
'file'
- The name of the file that emitted the message. (normally server information)
'function'
- The name of the function that emitted the message. (normally server information)
'line'
- The line of the file that emitted the message. (normally server information)
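As a small, hypothetical sketch, a msghook can use these attributes to build its own report; the formatting here is purely illustrative:
>>> def report(msg):
...     # msg is a postgresql.message.Message; every key in msg.details is optional.
...     print(msg.code, msg.message)
...     print('severity:', msg.details.get('severity'))
...     if 'hint' in msg.details:
...         print('hint:', msg.details['hint'])
...     # Return None (a false value) so the message continues to propagate.
...
>>> db.msghook = report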
Client Parameters¶
Warning
The interfaces dealing with optparse are subject to change in 1.0.
There are various sources of parameters used by PostgreSQL client applications. The postgresql.clientparameters module provides a means for collecting and managing those parameters.
Connection creation interfaces in postgresql.driver are purposefully simple. All parameters taken by those interfaces are keywords, and are taken literally; if a parameter is not given, it will effectively be None. libpq-based drivers tend to differ, as they inherit some default client parameters from the environment. Doing this by default is undesirable, as it can cause trivial failures due to unexpected parameter inheritance. However, using these parameters from the environment and other sources is simply expected in some cases: postgresql.open, postgresql.bin.pg_python, and other high-level utilities. The postgresql.clientparameters module provides a means to collect them into one dictionary object for subsequent application to a connection creation interface.
postgresql.clientparameters is primarily useful to script authors that want to
provide an interface consistent with PostgreSQL commands like psql
.
Collecting Parameters¶
The primary entry points in postgresql.clientparameters are postgresql.clientparameters.collect and postgresql.clientparameters.resolve_password.
For most purposes, collect will suffice. By default, it will prompt for the password if instructed to (-W). Therefore, resolve_password need not be used in most cases:
>>> import sys
>>> import postgresql.clientparameters as pg_param
>>> p = pg_param.DefaultParser()
>>> co, ca = p.parse_args(sys.argv[1:])
>>> params = pg_param.collect(parsed_options = co)
The postgresql.clientparameters module is executable, so you can see the results of the above snippet by:
$ python -m postgresql.clientparameters -h localhost -U a_db_user -ssearch_path=public
{'host': 'localhost',
'password': None,
'port': 5432,
'settings': {'search_path': 'public'},
'user': 'a_db_user'}
postgresql.clientparameters.collect¶
Build a client parameter dictionary from the environment and parsed command line options. The following is a list of keyword arguments that collect will accept:
parsed_options
- Options parsed by postgresql.clientparameters.StandardParser or postgresql.clientparameters.DefaultParser instances.
no_defaults
- When True, don’t include defaults like pgpassfile and user. Defaults to False.
environ
- Environment variables to extract client parameter variables from. Defaults to os.environ and expects a collections.abc.Mapping interface.
environ_prefix
- Environment variable prefix to use. Defaults to “PG”. This allows the collection of non-standard environment variables whose keys are partially consistent with the standard variants, e.g. “PG_SRC_USER”, “PG_SRC_HOST”, etc. (See the sketch after this list.)
default_pg_sysconfdir
- The location of the pg_service.conf file. The PGSYSCONFDIR environment variable will override this. When a default installation is present, PGINSTALLATION, it should be set to this.
pg_service_file
- Explicit location of the service file. This will override the “sysconfdir” based path.
prompt_title
- Descriptive title to use if a password prompt is needed. None to disable password resolution entirely. Setting this to None will also disable pgpassfile lookups, so it is necessary that further processing occurs when this is None.
parameters
- Base client parameters to use. These are set after the defaults are collected (the defaults that can be disabled by no_defaults).
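A minimal sketch of the environ and environ_prefix keywords, using a hand-made environment mapping; the variable names and the exact prefix form are illustrative assumptions:
>>> import postgresql.clientparameters as pg_param
>>> src_env = {'PG_SRC_HOST': 'src.example.com', 'PG_SRC_USER': 'replicator'}
>>> params = pg_param.collect(
...     environ = src_env,
...     environ_prefix = 'PG_SRC_',
...     prompt_title = None,
...     parameters = {'database': 'source_db'},
... )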
If prompt_title is not set to None, it will prompt for the password when instructed to do so by the prompt_password key in the parameters:
>>> import postgresql.clientparameters as pg_param
>>> p = pg_param.collect(prompt_title = 'my_prompt!', parameters = {'prompt_password':True})
Password for my_prompt![pq://dbusername@localhost:5432]:
>>> p
{'host': 'localhost', 'user': 'dbusername', 'password': 'secret', 'port': 5432}
If None, it will leave the necessary password resolution information in the parameters dictionary for resolve_password:
>>> p = pg_param.collect(prompt_title = None, parameters = {'prompt_password':True})
>>> p
{'pgpassfile': '/home/{USER}/.pgpass', 'prompt_password': True, 'host': 'localhost', 'user': 'dbusername', 'port': 5432}
Of course, 'prompt_password' is normally specified when parsed_options received a -W option from the command line:
>>> op = pg_param.DefaultParser()
>>> co, ca = op.parse_args(['-W'])
>>> p = pg_param.collect(parsed_options = co)
Password for [pq://dbusername@localhost:5432]:
>>> p
{'host': 'localhost', 'user': 'dbusername', 'password': 'secret', 'port': 5432}
>>>
postgresql.clientparameters.resolve_password¶
Resolve the password for the given client parameters dictionary returned by collect. Normally, this function need not be used, as collect will resolve the password by default. resolve_password accepts the following arguments:
parameters
- First positional argument. Normalized client parameters dictionary to update in-place with the resolved password. If the ‘prompt_password’ key is in parameters, it will prompt regardless (normally comes from -W).
getpass
- Function to call to prompt for the password. Defaults to getpass.getpass.
prompt_title
- Additional title to use if a prompt is requested. This can also be specified in the parameters as the prompt_title key. This augments the IRI display on the prompt. Defaults to an empty string, ''.
The resolution process is affected by the contents of the given parameters. Notable keywords:
prompt_password
- If present in the given parameters, the user will be prompted for the password using the given getpass function. This disables the password file lookup process.
prompt_title
- This states a default prompt title to use. If the prompt_title keyword argument is given to resolve_password, this will not be used.
pgpassfile
- The PostgreSQL password file to look up the password in. If the password parameter is present, this will not be used.
When resolution occurs, the prompt_password, prompt_title, and pgpassfile keys are removed from the given parameters dictionary:
>>> p=pg_param.collect(prompt_title = None)
>>> p
{'pgpassfile': '/home/{USER}/.pgpass', 'host': 'localhost', 'user': 'dbusername', 'port': 5432}
>>> pg_param.resolve_password(p)
>>> p
{'host': 'localhost', 'password': 'secret', 'user': 'dbusername', 'port': 5432}
Defaults¶
The following is a list of default parameters provided by collect and the sources of their values:
Key | Value |
---|---|
'user' | getpass.getuser() or 'postgres' |
'host' | postgresql.clientparameters.default_host ('localhost') |
'port' | postgresql.clientparameters.default_port (5432) |
'pgpassfile' | "$HOME/.pgpass" or [PGDATA] + 'pgpass.conf' (Win32) |
'sslcrtfile' | [PGDATA] + 'postgresql.crt' |
'sslkeyfile' | [PGDATA] + 'postgresql.key' |
'sslrootcrtfile' | [PGDATA] + 'root.crt' |
'sslrootcrlfile' | [PGDATA] + 'root.crl' |
[PGDATA] referenced in the above table is a directory whose path is platform dependent. On most systems, it is "$HOME/.postgresql", but on Windows based systems it is "%APPDATA%\postgresql".
Note
[PGDATA] is not an environment variable.
PostgreSQL Environment Variables¶
The following is a list of environment variables that will be collected by the postgresql.clientparameters.collect function using “PG” as the environ_prefix, and the keyword that each will be mapped to:
Environment Variable | Keyword |
---|---|
PGUSER | 'user' |
PGDATABASE | 'database' |
PGHOST | 'host' |
PGPORT | 'port' |
PGPASSWORD | 'password' |
PGSSLMODE | 'sslmode' |
PGSSLKEY | 'sslkey' |
PGCONNECT_TIMEOUT | 'connect_timeout' |
PGREALM | 'kerberos4_realm' |
PGKRBSRVNAME | 'kerberos5_service' |
PGPASSFILE | 'pgpassfile' |
PGTZ | 'settings' = {'timezone': } |
PGDATESTYLE | 'settings' = {'datestyle': } |
PGCLIENTENCODING | 'settings' = {'client_encoding': } |
PGGEQO | 'settings' = {'geqo': } |
PostgreSQL Password File¶
The password file is a simple newline separated list of :-separated fields. It is located at $HOME/.pgpass for most systems and at %APPDATA%\postgresql\pgpass.conf for Windows based systems. However, the PGPASSFILE environment variable may be used to override that location.
The lines in the file must be in the following form:
hostname:port:database:username:password
A single asterisk, *, may be used to indicate that any value will match the field. However, this only affects fields other than password.
See http://www.postgresql.org/docs/current/static/libpq-pgpass.html for more details.
Client parameters produced by collect that have not been processed by resolve_password will include a 'pgpassfile' key. This is the value that resolve_password will use to locate the pgpassfile to interrogate if a password key is not present and it is not instructed to prompt for a password.
Warning
Connection creation interfaces will not resolve 'pgpassfile', so it is important that the parameters produced by collect() are properly processed before an attempt is made to establish a connection.
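A minimal sketch of that processing, assuming the resulting keywords are acceptable to the connection creation interface used here (postgresql.driver.connect):
>>> import postgresql.driver as pg_driver
>>> import postgresql.clientparameters as pg_param
>>> params = pg_param.collect(prompt_title = None)
>>> pg_param.resolve_password(params)
>>> # 'pgpassfile' has been consumed at this point; the remaining keys can be
>>> # handed to a connection creation interface.
>>> db = pg_driver.connect(**params)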
Cluster Management¶
py-postgresql provides cluster management tools in order to give the user fine-grained control over a PostgreSQL cluster and access to information about an installation of PostgreSQL.
Installations¶
postgresql.installation.Installation objects are primarily used to access PostgreSQL installation information. Normally, they are created using a dictionary constructed from the output of the pg_config executable:
from postgresql.installation import Installation, pg_config_dictionary
pg_install = Installation(pg_config_dictionary('/usr/local/pgsql/bin/pg_config'))
The extraction of pg_config information is isolated from Installation instantiation in order to allow Installations to be created from arbitrary dictionaries. This can be useful in cases where the installation layout is inconsistent with the standard PostgreSQL installation layout, or if a faux Installation needs to be created for testing purposes.
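For example, a faux Installation might be built from a hand-made dictionary; the keys below are assumptions mirroring pg_config’s lowercased field names and may need adjustment for a real layout:
from postgresql.installation import Installation

# Hand-made information dictionary; keys are assumed to follow pg_config's
# lowercased field names ('version', 'bindir', 'libdir', ...).
faux_info = {
    'version': 'PostgreSQL 9.0devel',
    'bindir': '/opt/custom-pg/bin',
    'libdir': '/opt/custom-pg/lib',
}
faux_install = Installation(faux_info)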
Installation Interface Points¶
Installation(info)
Instantiate an Installation using the given information. Normally, this information is extracted from a pg_config executable using postgresql.installation.pg_config_dictionary:
info = pg_config_dictionary('/usr/local/pgsql/bin/pg_config')
pg_install = Installation(info)
Installation.version
The installation’s version string:
pg_install.version
'PostgreSQL 9.0devel'
Installation.version_info
A tuple containing the version’s (major, minor, patch, state, level). Where major, minor, patch, and level are int objects, and state is a str object:
pg_install.version_info
(9, 0, 0, 'devel', 0)
Installation.ssl
- A bool indicating whether or not the installation has SSL support.
Installation.configure_options
The options given to the configure script that built the installation. The options are represented using a dictionary object whose keys are normalized long option names, and whose values are the option’s argument. If the option takes no argument, True will be used as the value.
The normalization of the long option names consists of removing the preceding dashes, lowering the string, and replacing any dashes with underscores. For instance, --enable-debug will be enable_debug:
pg_install.configure_options
{'enable_debug': True, 'with_libxml': True, 'enable_cassert': True, 'with_libedit_preferred': True, 'prefix': '/src/build/pg90', 'with_openssl': True, 'enable_integer_datetimes': True, 'enable_depend': True}
Installation.paths
The paths of the installation as a dictionary where the keys are the path identifiers and the values are the absolute file system paths. For instance, 'bindir' is associated with $PREFIX/bin, 'libdir' is associated with $PREFIX/lib, etc. The paths included in this dictionary are listed on the class’ attributes: Installation.pg_directories and Installation.pg_executables.
The keys that point to installation directories are: bindir, docdir, includedir, pkgincludedir, includedir_server, libdir, pkglibdir, localedir, mandir, sharedir, and sysconfdir.
The keys that point to installation executables are: pg_config, psql, initdb, pg_resetxlog, pg_controldata, clusterdb, pg_ctl, pg_dump, pg_dumpall, postgres, postmaster, reindexdb, vacuumdb, ipcclean, createdb, ecpg, createuser, createlang, droplang, dropuser, and pg_restore.
Note
If the executable does not exist, the value will be None instead of an absolute path.
To get the path to the psql executable:
from postgresql.installation import Installation, pg_config_dictionary
pg_install = Installation(pg_config_dictionary('/usr/local/pgsql/bin/pg_config'))
psql_path = pg_install.paths['psql']
Clusters¶
postgresql.cluster.Cluster is the class used to manage a PostgreSQL cluster–a data directory created by initdb. A Cluster represents a data directory with respect to a given installation of PostgreSQL, so creating a postgresql.cluster.Cluster object requires a postgresql.installation.Installation, and a file system path to the data directory.
In part, a postgresql.cluster.Cluster is the Python programmer’s variant of the pg_ctl command. However, it goes beyond the basic process control functionality and extends into initialization and configuration as well.
A Cluster manages the server process using the subprocess module and signals. The subprocess.Popen object, Cluster.daemon_process, is retained when the Cluster starts the server process itself. This gives the Cluster access to the result code of the server process when it exits, and the ability to redirect stderr and stdout to a parameterized file object using subprocess features.
Despite its use of subprocess, Clusters can control a server process that was not started by the Cluster’s start method.
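For instance, a sketch of pointing a Cluster at a data directory whose server was started elsewhere (the paths are illustrative):
from postgresql.installation import Installation, pg_config_dictionary
from postgresql.cluster import Cluster

pg_install = Installation(pg_config_dictionary('/usr/local/pgsql/bin/pg_config'))
pg_cluster = Cluster(pg_install, '/var/db/existing_pg_data')

# The server was started outside of this process; the Cluster can still
# report on it and signal it.
if pg_cluster.running():
    print(pg_cluster.pid())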
Initializing Clusters¶
postgresql.cluster.Cluster provides a method for initializing a Cluster’s data directory, init. This method provides a Python interface to the PostgreSQL initdb command.
init is a regular method and accepts a few keyword parameters. Normally, parameters are directly mapped to initdb command options. However, password makes use of initdb’s capability to read the superuser’s password from a file. To do this, a temporary file is allocated internally by the method:
from postgresql.installation import Installation, pg_config_dictionary
from postgresql.cluster import Cluster
pg_install = Installation(pg_config_dictionary('/usr/local/pgsql/bin/pg_config'))
pg_cluster = Cluster(pg_install, 'pg_data')
pg_cluster.init(user = 'pg', password = 'secret', encoding = 'utf-8')
The init method will block until the initdb command is complete. Once initialized, the Cluster may be configured.
Configuring Clusters¶
A Cluster’s configuration file can be manipulated using the
Cluster.settings mapping. The mapping’s methods will always access the
configuration file, so it may be desirable to cache repeat reads. Also, if
multiple settings are being applied, using the update()
method may be
important to avoid writing the entire file multiple times:
pg_cluster.settings.update({'listen_addresses' : 'localhost', 'port' : '6543'})
Similarly, to avoid opening and reading the entire file multiple times, Cluster.settings.getset should be used to retrieve multiple settings:
d = pg_cluster.settings.getset(set(('listen_addresses', 'port')))
d
{'listen_addresses' : 'localhost', 'port' : '6543'}
Values contained in settings
are always Python strings:
assert pg_cluster.settings['max_connections'].__class__ is str
The postgresql.conf
file is only one part of the server configuration.
Structured access and manipulation of the pg_hba file is not
supported. Clusters only provide the file path to the pg_hba file:
hba = open(pg_cluster.hba_file)
If the configuration of the Cluster is altered while the server process is
running, it may be necessary to signal the process that configuration changes
have been made. This signal can be sent using the Cluster.reload()
method.
Cluster.reload()
will send a SIGHUP signal to the server process. However,
not all changes to configuration settings can go into effect after calling
Cluster.reload()
. In those cases, the server process will need to be
shutdown and started again.
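A short sketch of applying a change to a running cluster; the setting shown is illustrative, and some settings still require a full restart rather than a reload:
# Write the new value to postgresql.conf, then signal the server to reload.
pg_cluster.settings.update({'log_min_duration_statement': '250ms'})
pg_cluster.reload()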
Controlling Clusters¶
The server process of a Cluster object can be controlled with the start(), stop(), shutdown(), kill(), and restart() methods.
These methods start the server process, signal the server process, or, in the
case of restart, a combination of the two.
When a Cluster starts the server process, it is run as a subprocess. Therefore, if the current process exits, the server process will exit as well. start() does not automatically daemonize the server process.
Note
Under Microsoft Windows, the above does not hold true. The server process will continue running despite the exit of the parent process.
To terminate a server process, one of these three methods should be called: stop, shutdown, or kill. stop is a graceful shutdown and will wait for all clients to disconnect before shutting down. shutdown will close any open connections and safely shut down the server process. kill will immediately terminate the server process, leading to recovery upon starting the server process again.
Note
Using kill may cause shared memory to be leaked.
Normally, Cluster.shutdown is the appropriate way to terminate a server process.
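Putting those methods together, a minimal sketch of a managed server lifecycle (the file names are illustrative):
from postgresql.installation import Installation, pg_config_dictionary
from postgresql.cluster import Cluster

pg_install = Installation(pg_config_dictionary('/usr/local/pgsql/bin/pg_config'))
pg_cluster = Cluster(pg_install, 'pg_data')

with open('postgres.log', 'w') as logfile:
    # Redirect the server's stderr/stdout to the log file.
    pg_cluster.start(logfile = logfile)
    pg_cluster.wait_until_started()
    try:
        ...  # work with the running cluster
    finally:
        pg_cluster.shutdown()
        pg_cluster.wait_until_stopped()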
Cluster Interface Points¶
Methods and properties available on postgresql.cluster.Cluster instances:
Cluster(installation, data_directory)
Create a postgresql.cluster.Cluster object for the specified postgresql.installation.Installation and data_directory.
The data_directory must be an absolute file system path. The directory does not need to exist. The init() method may later be used to create the cluster.
Cluster.installation
- The Cluster’s postgresql.installation.Installation instance.
Cluster.data_directory
- The absolute path to the PostgreSQL data directory. This directory may not exist.
Cluster.init([encoding = None[, user = None[, password = None]]])
Run the initdb executable of the configured installation to initialize the cluster at the configured data directory, Cluster.data_directory.
encoding is mapped to -E, the default database encoding. By default, the encoding is determined from the environment’s locale.
user is mapped to -U, the database superuser name. By default, the current user’s name.
password is ultimately mapped to --pwfile. The argument given to the long option is actually a path to the temporary file that holds the given password.
Raises postgresql.cluster.InitDBError when initdb returns a non-zero result code.
Raises postgresql.cluster.ClusterInitializationError when there is no initdb in the Installation.
Cluster.initialized()
- Whether or not the data directory exists, and if it looks like a PostgreSQL data directory. Meaning, the directory must contain a postgresql.conf file and a base directory.
Cluster.drop()
- Shutdown the Cluster’s server process and completely remove the Cluster.data_directory from the file system.
Cluster.pid()
- The server’s process identifier as a Python int. None if there is no server process running. This is a method rather than a property as it may read the PID from a file in cases where the server process was not started by the Cluster.
Cluster.start([logfile = None[, settings = None]])
Start the PostgreSQL server process for the Cluster if it is not already running. This will execute postgres as a subprocess.
If logfile, an opened and writable file object, is given, stderr and stdout will be redirected to that file. By default, both stderr and stdout are closed.
If settings is given, the mapping or sequence of pairs will be used as long options to the subprocess. For each item, --{key}={value} will be given as an argument to the subprocess.
Cluster.running()
- Whether or not the cluster’s server process is running. Returns True or False. Even if True is returned, it does not mean that the server process is ready to accept connections.
Cluster.ready_for_connections()
Whether or not the Cluster is ready to accept connections. Usually called after Cluster.start.
Returns True when the Cluster can accept connections, False when it cannot, and None if the Cluster’s server process is not running at all.
Cluster.wait_until_started([timeout = 10[, delay = 0.05]])
Blocks the process until the cluster is identified as being ready for connections. Usually called after Cluster.start().
Raises postgresql.cluster.ClusterNotRunningError if the server process is not running at all.
Raises postgresql.cluster.ClusterTimeoutError if Cluster.ready_for_connections() does not return True within the given timeout period.
Raises postgresql.cluster.ClusterStartupError if the server process terminates while polling for readiness.
timeout and delay are both in seconds. Where timeout is the maximum time to wait for the Cluster to be ready for connections, and delay is the time to sleep between calls to Cluster.ready_for_connections().
Cluster.stop()
- Signal the cluster to shutdown when possible. The server will wait for all clients to disconnect before shutting down.
Cluster.shutdown()
- Signal the cluster to shutdown immediately. Any open client connections will be closed.
Cluster.kill()
- Signal the absolute destruction of the server process (SIGKILL). This will require recovery when the cluster is started again. Shared memory may be leaked.
Cluster.wait_until_stopped([timeout = 10[, delay = 0.05]])
Blocks the process until the cluster is identified as being shutdown. Usually called after Cluster.stop or Cluster.shutdown.
Raises postgresql.cluster.ClusterTimeoutError if Cluster.ready_for_connections does not return None within the given timeout period.
Cluster.reload()
- Signal the server that it should reload its configuration files (SIGHUP). Usually called after manipulating Cluster.settings or modifying the contents of Cluster.hba_file.
Cluster.restart([logfile = None[, settings = None[, timeout = 10]]])
Stop the server process, wait until it is stopped, start the server process, and wait until it has started.
Note
This calls Cluster.stop(), so it will wait until clients disconnect before starting up again.
The logfile and settings parameters will be given to Cluster.start. timeout will be given to Cluster.wait_until_stopped and Cluster.wait_until_started.
Cluster.settings
A collections.abc.Mapping interface to the postgresql.conf file of the cluster.
A notable extension to the mapping interface is the getset method. This method will return a dictionary object containing the settings whose names were contained in the set object given to the method. This method should be used when multiple settings need to be retrieved from the configuration file.
Cluster.hba_file
- The path to the cluster’s pg_hba file. This property respects the HBA file location setting in postgresql.conf. Usually, $PGDATA/pg_hba.conf.
Cluster.daemon_path
- The path to the executable to use to start the server process.
Cluster.daemon_process
- The subprocess.Popen instance of the server process. None if the server process was not started or was not started using the Cluster object.
Notification Management¶
Relevant SQL commands: NOTIFY, LISTEN, UNLISTEN.
Asynchronous notifications offer a means for PostgreSQL to signal application code. Often these notifications are used to signal cache invalidation. In 9.0 and greater, notifications may include a “payload” in which arbitrary data may be delivered on a channel being listened to.
By default, received notifications will merely be appended to an internal list on the connection object. This list will remain empty for the duration of a connection unless the connection begins listening to a channel that receives notifications.
The postgresql.notifyman.NotificationManager class is used to wait for messages to come in on a set of connections, pick up the messages, and deliver the messages to the object’s user via the collections.abc.Iterator protocol.
Listening on a Single Connection¶
The db.iternotifies() method is a simplification of the notification manager. It returns an iterator to the notifications received on the subject connection. The iterator yields triples consisting of the channel being notified, the payload sent with the notification, and the pid of the backend that caused the notification:
>>> db.listen('for_rabbits')
>>> db.notify('for_rabbits')
>>> for x in db.iternotifies():
...     channel, payload, pid = x
...     break
>>> assert channel == 'for_rabbits'
>>> assert payload == ''
>>> assert pid == db.backend_id
The iterator, by default, will continue listening forever unless the connection
is terminated–thus the immediate break
statement in the above loop. In
cases where some additional activity is necessary, a timeout parameter may be
given to the iternotifies
method in order to allow “idle” events to occur
at the designated frequency:
>>> for x in db.iternotifies(0.5):
... if x is None:
... break
The above example illustrates that idle events are represented using None
objects. Idle events are guaranteed to occur approximately at the
specified interval–the timeout
keyword parameter. In addition to
providing a means to do other processing or polling, they also offer a safe
break point for the loop. Internally, the iterator produced by the
iternotifies
method is a NotificationManager, which will localize the
notifications prior to emitting them via the iterator.
It’s not safe to break out of the loop, unless an idle event is being handled.
If the loop is broken while a regular event is being processed, some events may
remain in the iterator. In order to consume those events, the iterator must
be accessible.
The iterator will be exhausted when the connection is closed, but if the connection is closed during the loop, any remaining notifications will be emitted prior to the loop ending, so it is important to be prepared to handle exceptions or check for a closed connection.
In situations where multiple connections need to be watched, direct use of the NotificationManager is necessary.
Listening on Multiple Connections¶
The postgresql.notifyman.NotificationManager class is used to manage connections that are expecting to receive notifications. Instances are iterators that yield the connection object and notifications received on the connection or None in the case of an idle event. The manager emits events as a pair; the connection object that received notifications, and all the notifications picked up on that connection:
>>> from postgresql.notifyman import NotificationManager
>>> # Using ``nm`` to reference the manager from here on.
>>> nm = NotificationManager(db1, db2, ..., dbN)
>>> nm.settimeout(2)
>>> for x in nm:
... if x is None:
... # idle
... break
...
... db, notifies = x
... for channel, payload, pid in notifies:
... ...
The manager will continue to wait for and emit events so long as there are good connections available in the set; it is possible for connections to be added and removed at any time. Although, in rare circumstances, discarded connections may still have pending events if they are not removed during an idle event. The connections attribute on NotificationManager objects is a set object that may be used directly in order to add and remove connections from the manager:
>>> y = []
>>> for x in nm:
... if x is None:
... if y:
... nm.connections.add(y[0])
... del y[0]
...
The notification manager is resilient; if a connection dies, it will discard the
connection from the set, and add it to the set of bad connections, the
garbage
attribute. In these cases, the idle event should be leveraged to
check for these failures if that’s a concern. It is the user’s
responsibility to explicitly handle the failure cases, and remove the bad
connections from the garbage
set:
>>> for x in nm:
... if x is None:
... if nm.garbage:
... recovered = take_out_trash(nm.garbage)
... nm.connections.update(recovered)
... nm.garbage.clear()
... db, notifies = x
... for channel, payload, pid in notifies:
... ...
Explicitly removing connections from the set can also be a means to gracefully terminate the event loop:
>>> for x in nm:
... if x is None:
... if done_listening is True:
... nm.connections.clear()
However, doing so inside the loop is not a requirement; it is safe to remove a connection from the set at any point.
Notification Managers¶
The postgresql.notifyman.NotificationManager is an event loop that services multiple connections. In cases where only one connection needs to be serviced, the postgresql.api.Database.iternotifies method can be used to simplify the process.
Notification Manager Constructors¶
NotificationManager(*connections, timeout = None)
- Create a NotificationManager instance that manages the notifications coming from the given set of connections. The timeout keyword is optional and can be configured using the settimeout method as well.
Notification Manager Interface Points¶
NotificationManager.__iter__()
- Returns the instance; it is an iterator.
NotificationManager.__next__()
- Normally, yield the pair, connection and notifications list, when the next event is received. If a timeout is configured, None may be yielded to signal an idle event. The notifications list is a list of triples: (channel, payload, pid).
NotificationManager.settimeout(timeout : int)
- Set the amount of time to wait before the manager yields an idle event. If zero, the manager will never wait and only yield notifications that are immediately available. If None, the manager will never emit idle events.
NotificationManager.gettimeout() -> [int, None]
- Get the configured timeout; returns either None, or an int.
NotificationManager.connections
- The set of connections that the manager is actively watching for notifications. Connections may be added or removed from the set at any time.
NotificationManager.garbage
- The set of connections that failed. Normally empty, but when a connection gets an exceptional condition or explicitly raises an exception, it is removed from the connections set, and placed in garbage.
Zero Timeout¶
When a timeout of zero, 0
, is configured, the notification manager will
terminate early. Specifically, each connection will be polled for any pending
notifications, and once all of the collected notifications have been emitted
by the iterator, StopIteration will be raised. Notably, no idle events will
occur when the timeout is configured to zero.
Zero timeouts offer a means for the notification “queue” to be polled. Often, this is the appropriate way to collect pending notifications on active connections where using the connection exclusively for waiting is not practical:
>>> notifies = list(db.iternotifies(0))
Or with a NotificationManager instance:
>>> nm.settimeout(0)
>>> db_notifies = list(nm)
In both cases of zero timeout, the iterator may be promptly discarded without losing any events.
Summary of Characteristics¶
- The iterator will continue until the connections die.
- Objects yielded by the iterator are either None, an “idle event”, or an individual notification triple if using db.iternotifies(), or a (db, notifies) pair if using the base NotificationManager.
- When a connection dies or raises an exception, it will be removed from the nm.connections set and added to the nm.garbage set.
- The NotificationManager instance will not hold any notifications during an idle event. Idle events offer a break point in which the manager may be discarded.
- A timeout of zero will cause the iterator to only yield the events that are pending right now, and promptly end. However, the same manager object may be used again.
- A notification triple is a tuple consisting of (channel, payload, pid).
- Connections may be added and removed from the nm.connections set at any time.
Advisory Locks¶
Warning
postgresql.alock is a new feature in v1.0.
See also: Explicit Locking in the PostgreSQL documentation.
PostgreSQL’s advisory locks offer a cooperative synchronization primitive. These are used in cases where an application needs access to a resource, but using table locks may cause interference with other operations that can be safely performed alongside the application-level, exclusive operation.
Advisory locks can be used by directly executing the stored procedures in the database or by using the postgresql.alock.ALock subclasses, which provide context managers that use those stored procedures.
Currently, only two subclasses exist, each representing one of the lock modes supported by PostgreSQL’s advisory locks; they are described in the ALocks section below.
Acquiring ALocks¶
An ALock instance represents a sequence of advisory locks. A single ALock can acquire and release multiple advisory locks by creating the instance with multiple lock identifiers:
>>> from postgresql import alock
>>> table1_oid = 192842
>>> table2_oid = 192849
>>> l = alock.ExclusiveLock(db, (table1_oid, 0), (table2_oid, 0))
>>> l.acquire()
>>> ...
>>> l.release()
postgresql.alock.ALock is similar to threading.RLock; in order for an ALock to be released, it must be released the number of times it has been acquired. ALocks are associated with and survived by their session. Much like how RLocks are associated with the thread they are acquired in: acquiring an ALock again will merely increment its count.
PostgreSQL allows advisory locks to be identified using a pair of int4 or a single int8. ALock instances represent a sequence of those identifiers:
>>> from postgresql import alock
>>> ids = [(0,0), 0, 1]
>>> with alock.ShareLock(db, *ids):
... ...
Both types of identifiers may be used within the same ALock, and, regardless of their type, they will be acquired in the order that they were given to the class’ constructor. In the above example, (0,0) is acquired first, then 0, and lastly 1.
ALocks¶
postgresql.alock.ALock is abstract; it defines the interface and some common functionality. The lock mode is selected by choosing the appropriate subclass.
There are two:
postgresql.alock.ExclusiveLock(database, *identifiers)
- Instantiate an ALock object representing the identifiers for use with the database. Exclusive locks will conflict with other exclusive locks and share locks.
postgresql.alock.ShareLock(database, *identifiers)
- Instantiate an ALock object representing the identifiers for use with the database. Share locks can be acquired when a share lock with the same identifier has been acquired by another backend. However, an exclusive lock with the same identifier will conflict.
ALock Interface Points¶
Methods and properties available on postgresql.alock.ALock instances:
alock.acquire(blocking = True)
Acquire the advisory locks represented by the alock object. If blocking is True, the default, the method will block until locks on all the identifiers have been acquired.
If blocking is False, acquisition will not block, and success will be indicated by the returned object: True if all lock identifiers were acquired and False if any of the lock identifiers could not be acquired. (A non-blocking sketch follows this list.)
alock.release()
- Release the advisory locks represented by the alock object. If the lock has not been acquired, a RuntimeError will be raised.
alock.locked()
- Returns a boolean describing whether the locks are held or not. This will return False if the lock connection has been closed.
alock.__enter__()
- Alias to acquire; context manager protocol. Always blocking.
alock.__exit__(typ, val, tb)
- Alias to release; context manager protocol.
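A minimal non-blocking sketch; the identifier is illustrative and db is an established connection:
>>> from postgresql import alock
>>> l = alock.ExclusiveLock(db, (192842, 0))
>>> if l.acquire(blocking = False):
...     try:
...         ...  # the resource is held exclusively here
...     finally:
...         l.release()
... else:
...     ...  # someone else holds the lock; try again later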
Copy Management¶
The postgresql.copyman module provides a way to quickly move COPY data coming from one connection to many connections. Alternatively, it can be sourced by arbitrary iterators and target arbitrary callables.
Statement execution methods offer a way for running COPY operations with iterators, but the cost of allocating objects for each row is too significant for transferring gigabytes of COPY data from one connection to another. The interfaces available on statement objects are primarily intended to be used when transferring COPY data to and from arbitrary Python objects.
Direct connection-to-connection COPY operations can be performed using the high-level postgresql.copyman.transfer function:
>>> from postgresql import copyman
>>> send_stmt = source.prepare("COPY (SELECT i FROM generate_series(1, 1000000) AS g(i)) TO STDOUT")
>>> destination.execute("CREATE TEMP TABLE loading_table (i int8)")
>>> receive_stmt = destination.prepare("COPY loading_table FROM STDIN")
>>> total_rows, total_bytes = copyman.transfer(send_stmt, receive_stmt)
However, if more control is needed, the postgresql.copyman.CopyManager class should be used directly.
Copy Managers¶
The postgresql.copyman.CopyManager class manages the Producer and the Receivers involved in a COPY operation. Normally, postgresql.copyman.StatementProducer and postgresql.copyman.StatementReceiver instances. Naturally, a Producer is the object that produces the COPY data to be given to the Manager’s Receivers.
Using a Manager directly means that there is a need for more control over the operation. The Manager is both a context manager and an iterator. The context manager interfaces handle initialization and finalization of the COPY state, and the iterator provides an event loop emitting information about the amount of COPY data transferred this cycle. Normal usage takes the form:
>>> from postgresql import copyman
>>> send_stmt = source.prepare("COPY (SELECT i FROM generate_series(1, 1000000) AS g(i)) TO STDOUT")
>>> destination.execute("CREATE TEMP TABLE loading_table (i int8)")
>>> receive_stmt = destination.prepare("COPY loading_table FROM STDIN")
>>> producer = copyman.StatementProducer(send_stmt)
>>> receiver = copyman.StatementReceiver(receive_stmt)
>>>
>>> with source.xact(), destination.xact():
... with copyman.CopyManager(producer, receiver) as copy:
... for num_messages, num_bytes in copy:
... update_rate(num_bytes)
As an alternative to a for-loop inside a with-statement block, the run method can be called to perform the operation:
>>> with source.xact(), destination.xact():
... copyman.CopyManager(producer, receiver).run()
However, there is little benefit beyond using the high-level postgresql.copyman.transfer function.
Manager Interface Points¶
Primarily, the postgresql.copyman.CopyManager provides a context manager and an iterator for controlling the COPY operation.
CopyManager.run()
- Perform the entire COPY operation.
CopyManager.__enter__()
- Start the COPY operation. Connections taking part in the COPY should not be used until __exit__ is run.
CopyManager.__exit__(typ, val, tb)
- Finish the COPY operation. Fails in the case of an incomplete COPY, or an untrapped exception. Either returns None or raises the generalized exception, postgresql.copyman.CopyFail.
CopyManager.__iter__()
- Returns the CopyManager instance.
CopyManager.__next__()
Transfer the next chunk of COPY data to the receivers. Yields a tuple consisting of the number of messages and bytes transferred, (num_messages, num_bytes). Raises StopIteration when complete.
Raises postgresql.copyman.ReceiverFault when a Receiver raises an exception. Raises postgresql.copyman.ProducerFault when the Producer raises an exception. The original exception is available via the exception’s __context__ attribute.
CopyManager.reconcile(faulted_receiver)
- Reconcile a faulted receiver. When a receiver faults, it will no longer be in the set of Receivers. This method is used to signal to the manager that the problem has been corrected, and the receiver is again ready to receive.
CopyManager.receivers
- The builtins.set of Receivers involved in the COPY operation.
CopyManager.producer
- The Producer emitting the data to be given to the Receivers.
Faults¶
The CopyManager generalizes any exceptions that occur during transfer. While inside the context manager, postgresql.copyman.Fault may be raised if a Receiver or a Producer raises an exception. A postgresql.copyman.ProducerFault in the case of the Producer, and postgresql.copyman.ReceiverFault in the case of the Receivers.
Note
Faults are only raised by postgresql.copyman.CopyManager.__next__. The
run()
method will only raise postgresql.copyman.CopyFail.
Receiver Faults¶
The Manager assumes the Fault is fatal to a Receiver, and immediately removes it from the set of target receivers. Additionally, if the Fault exception goes untrapped, the copy will ultimately fail.
The Fault exception references the Manager that raised the exception and the actual exceptions that occurred, keyed by the Receiver that caused them.
In order to identify the exception that caused a Fault, the faults attribute on the postgresql.copyman.ReceiverFault must be referenced:
>>> from postgresql import copyman
>>> send_stmt = source.prepare("COPY (SELECT i FROM generate_series(1, 1000000) AS g(i)) TO STDOUT")
>>> destination.execute("CREATE TEMP TABLE loading_table (i int8)")
>>> receive_stmt = destination.prepare("COPY loading_table FROM STDIN")
>>> producer = copyman.StatementProducer(send_stmt)
>>> receiver = copyman.StatementReceiver(receive_stmt)
>>>
>>> with source.xact(), destination.xact():
... with copyman.CopyManager(producer, receiver) as copy:
... while copy.receivers:
... try:
... for num_messages, num_bytes in copy:
... update_rate(num_bytes)
... break
... except copyman.ReceiverFault as cf:
... # Access the original exception using the receiver as the key.
... original_exception = cf.faults[receiver]
... if unknown_failure(original_exception):
... ...
... raise
ReceiverFault Properties¶
The following attributes exist on postgresql.copyman.ReceiverFault instances:
ReceiverFault.manager
- The subject postgresql.copyman.CopyManager instance.
ReceiverFault.faults
- A dictionary mapping the Receiver to the exception raised by that Receiver.
Reconciliation¶
When a postgresql.copyman.ReceiverFault is raised, the Manager immediately removes the Receiver so that the COPY operation can continue. Continuation of the COPY can occur by trapping the exception and continuing the iteration of the Manager. However, if the fault is recoverable, the postgresql.copyman.CopyManager.reconcile method must be used to reintroduce the Receiver into the Manager’s set. Faults must be trapped from within the Manager’s context:
>>> import socket
>>> from postgresql import copyman
>>> send_stmt = source.prepare("COPY (SELECT i FROM generate_series(1, 1000000) AS g(i)) TO STDOUT")
>>> destination.execute("CREATE TEMP TABLE loading_table (i int8)")
>>> receive_stmt = destination.prepare("COPY loading_table FROM STDIN")
>>> producer = copyman.StatementProducer(send_stmt)
>>> receiver = copyman.StatementReceiver(receive_stmt)
>>>
>>> with source.xact(), destination.xact():
... with copyman.CopyManager(producer, receiver) as copy:
... while copy.receivers:
... try:
... for num_messages, num_bytes in copy:
... update_rate(num_bytes)
... except copyman.ReceiverFault as cf:
... if isinstance(cf.faults[receiver], socket.timeout):
... copy.reconcile(receiver)
... else:
... raise
Recovering from Faults does add significant complexity to a COPY operation, so, often, it’s best to avoid conditions in which reconcilable Faults may occur.
Producer Faults¶
Producer faults are normally fatal to the COPY operation and should rarely be trapped. However, the Manager makes no state changes when a Producer faults, so, unlike Receiver Faults, no reconciliation process is necessary; rather, if it’s safe to continue, the Manager’s iterator should continue to be processed.
ProducerFault Properties¶
The following attributes exist on postgresql.copyman.ProducerFault instances:
ProducerFault.manager
- The subject postgresql.copyman.CopyManager.
ProducerFault.__context__
- The original exception raised by the Producer.
Failures¶
When a COPY operation is aborted, either by an exception or by the iterator being broken, a postgresql.copyman.CopyFail exception will be raised by the Manager’s __exit__() method. The postgresql.copyman.CopyFail exception records any exceptions that occur during the exit of the context managers of the Producer and the Receivers.
CopyFail Properties¶
The following properties exist on postgresql.copyman.CopyFail exceptions:
CopyFail.manager
- The Manager whose COPY operation failed.
CopyFail.receiver_faults
- A dictionary mapping a postgresql.copyman.Receiver to the exception raised by that Receiver’s __exit__. None if no exceptions were raised by the Receivers.
CopyFail.producer_fault
- The exception raised by the postgresql.copyman.Producer. None if no exception was raised.
Producers¶
The following Producers are available:
postgresql.copyman.StatementProducer(postgresql.api.Statement)
- Given a Statement producing COPY data, construct a Producer.
postgresql.copyman.IteratorProducer(collections.abc.Iterator)
- Given an Iterator producing chunks of COPY lines, construct a Producer to manage the data coming from the iterator.
Receivers¶
postgresql.copyman.StatementReceiver(postgresql.api.Statement)
- Given a Statement that receives COPY data, construct a Receiver.
postgresql.copyman.CallReceiver(callable)
- Given a callable, construct a Receiver that will transmit COPY data in chunks of lines. That is, the callable will be given a list of COPY lines for each transfer cycle.
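A minimal sketch pairing the two non-connection factors; the chunking, and whether COPY lines are bytes or str, are assumptions here rather than guarantees:
>>> from postgresql import copyman
>>>
>>> # An iterator producing chunks (sequences) of COPY lines.
>>> chunks = iter([
...     [b'1\tone\n', b'2\ttwo\n'],
...     [b'3\tthree\n'],
... ])
>>> received = []
>>> def take_lines(lines):
...     # CallReceiver hands the callable a list of COPY lines per transfer cycle.
...     received.extend(lines)
...
>>> producer = copyman.IteratorProducer(chunks)
>>> receiver = copyman.CallReceiver(take_lines)
>>> copyman.CopyManager(producer, receiver).run()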
Terminology¶
The following terms are regularly used to describe the implementation and processes of the postgresql.copyman module:
- Manager
- The object used to manage data coming from a Producer and being given to the Receivers. It also manages the necessary initialization and finalization steps required by those factors.
- Producer
- The object used to produce the COPY data to be given to the Receivers. The source.
- Receiver
- An object that consumes COPY data. A target.
- Fault
- Specifically, postgresql.copyman.Fault exceptions. A Fault is raised when a Receiver or a Producer raises an exception during the COPY operation.
- Reconciliation
- Generally, the steps performed by the “reconcile” method on postgresql.copyman.CopyManager instances. More precisely, the necessary steps for a Receiver’s reintroduction into the COPY operation after a Fault.
- Failed Copy
- A failed copy is an aborted COPY operation. This occurs in situations of untrapped exceptions or an incomplete COPY. Specifically, the COPY will be noted as failed in cases where the Manager’s iterator is not ran until exhaustion.
- Realignment
- The process of providing compensating data to the Receivers so that the connection will be on a message boundary. Occurs when the COPY operation is aborted.
Gotchas¶
It is recognized that decisions were made that may not always be ideal for a given user. In order to highlight those potential issues and hopefully bring some sense into a confusing situation, this document was drawn.
Thread Safety¶
py-postgresql connection operations are not thread safe.
client_encoding setting should be altered carefully¶
postgresql.driver’s streaming cursor implementation reads a fixed set of rows when it queries the server for more. In order to optimize some situations, the driver will send a request for more data, but makes no attempt to wait and process the data, as it is not yet needed. When the user comes back to read more data from the cursor, it will then look at this new data. The problem is that, if client_encoding was switched in the meantime, the driver may use the wrong codec to transform the wire data into higher level Python objects (str).
To keep this problem from ever happening, set the client_encoding early. Furthermore, it is probably best never to change the client_encoding, as the driver automatically makes the necessary transformation to Python strings.
The user and password is correct, but it does not work when using postgresql.driver¶
This issue likely comes from the possibility that the information sent to the server early in the negotiation phase may not be in an encoding that is consistent with the server’s encoding.
One problem is that PostgreSQL does not provide the client with the server encoding early enough in the negotiation phase, and, therefore, is unable to process the password data in a way that is consistent with the server’s expectations.
Another problem is that PostgreSQL takes much of the data in the startup message as-is, so a decision about the best way to encode parameters is difficult.
The easy way to avoid most issues with this problem is to initialize the database in the utf-8 encoding. The driver defaults the expected server encoding to utf-8. However, this can be overridden by creating the Connector with a server_encoding parameter. Setting server_encoding to the proper value of the target server will allow the driver to properly encode some of the parameters. Also, any GUC parameters passed via the settings parameter should use typed objects when possible to hint that the server encoding should not be used on that parameter (bytes, for instance).
Backslash characters are being treated literally¶
The driver enables standard compliant strings. Stop using non-standard features. ;)
If support for non-standard strings were provided, it would require the driver to provide subjective quote interfaces (e.g., db.quote_literal). Doing so is not desirable, as it introduces difficulties for the driver and the user.
Types without binary support in the driver are unsupported in arrays and records¶
When an array or composite type is identified, postgresql.protocol.typio ultimately chooses the binary format for the transfer of the column or parameter. When this is done, PostgreSQL will pack or expect all the values in binary format as well. If that binary format is not supported and the type is not a string type, it will fail to unpack the row or pack the appropriate data for the element or attribute.
In most cases issues related to this can be avoided with explicit casts to text.
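For instance, a column of a type without binary support can be cast in the statement itself; the column and table names here are hypothetical:
>>> ps = db.prepare("SELECT some_custom_col::text FROM some_table")
>>> # The driver now receives text data and yields str objects for that column.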
NOTICEs, WARNINGs, and other messages are too verbose¶
For many situations, the information provided with database messages is far too verbose. However, considering that py-postgresql is a programmer’s library, the default of high verbosity is taken with the express purpose of allowing the programmer to “adjust the volume” until appropriate.
By default, py-postgresql adjusts the client_min_messages setting to only emit messages at the WARNING level or higher–ERRORs, FATALs, and PANICs. This reduces the number of messages generated by most connections dramatically.
If further customization is needed, the Database Messages section has information on overriding the default action taken with database messages.
Strange TypeError using load_rows() or load_chunks()¶
When a prepared statement is directly executed using __call__(), it can easily validate that the appropriate number of parameters is given to the function. When load_rows() or load_chunks() is used, any tuple in the entire sequence can cause this TypeError during the loading process:
TypeError: inconsistent items, N processors and M items in row
This exception is raised by a generic processing routine whose functionality is abstract in nature, so the message is abstract as well. It essentially means that a tuple in the sequence given to the loading method had too many or too few items.
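A minimal sketch of a well-formed load_rows call; the table and columns are hypothetical, and the point is that every tuple carries exactly as many items as the statement has parameters:
>>> ins = db.prepare("INSERT INTO t (a, b) VALUES ($1, $2)")
>>> ins.load_rows([
...     (1, 'one'),
...     (2, 'two'),
... ])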
Non-English Locales¶
In the past, some builds of PostgreSQL localized the severity field of some protocol messages. py-postgresql expects these fields to be consistent with their English terms. If the driver raises strange exceptions during the use of non-English locales, it may be necessary to use an English setting in order to coax the server into issuing familiar terms.
Reference¶
Commands¶
This chapter discusses the usage of the available console scripts.
postgresql.bin.pg_python¶
The pg_python command provides a simple way to write Python scripts against a single target database. It acts like the regular Python console command, but takes standard PostgreSQL options as well to specify the client parameters used to establish the connection. The Python environment is then augmented with the following built-ins:
db
- The PG-API connection object.
xact
- db.xact, the transaction creator.
settings
- db.settings
prepare
- db.prepare, the statement creator.
proc
- db.proc
do
- db.do, execute a single DO statement.
sqlexec
- db.execute, execute multiple SQL statements (None is always returned).
pg_python Usage¶
Usage: postgresql.bin.pg_python [connection options] [script] …
- Options:
--unix=UNIX
- path to filesystem socket
--ssl-mode=SSLMODE
- SSL requirement for connectivity: require, prefer, allow, disable
-s SETTINGS, --setting=SETTINGS
- run-time parameters to set upon connecting
-I PQ_IRI, --iri=PQ_IRI
- database locator string [pq://user:password@host:port/database?setting=value]
-h HOST, --host=HOST
- database server host
-p PORT, --port=PORT
- database server port
-U USER, --username=USER
- user name to connect as
-W, --password
- prompt for password
-d DATABASE, --database=DATABASE
- database's name
--pq-trace=PQ_TRACE
- trace PQ protocol transmissions
-C PYTHON_CONTEXT, --context=PYTHON_CONTEXT
- Python context code to run [file://, module:, <code>]
-m PYTHON_MAIN
- Python module to run as script (__main__)
-c PYTHON_MAIN
- Python expression to run (__main__)
--version
- show program's version number and exit
--help
- show this help message and exit
Interactive Console Backslash Commands¶
Inspired by psql:
>>> \?
Backslash Commands:
\? Show this help message.
\E Edit a file or a temporary script.
\e Edit and Execute the file directly in the context.
\i Execute a Python script within the interpreter's context.
\set Configure environment variables. \set without arguments to show all
\x Execute the Python command within this process.
pg_python Examples¶
Module execution taking advantage of the new built-ins:
$ python3 -m postgresql.bin.pg_python -h localhost -W -m timeit "prepare('SELECT 1').first()"
Password for pg_python[pq://dbusername@localhost:5432]:
1000 loops, best of 3: 1.35 msec per loop
$ python3 -m postgresql.bin.pg_python -h localhost -W -m timeit -s "ps=prepare('SELECT 1')" "ps.first()"
Password for pg_python[pq://dbusername@localhost:5432]:
1000 loops, best of 3: 442 usec per loop
Simple interactive usage:
$ python3 -m postgresql.bin.pg_python -h localhost -W
Password for pg_python[pq://dbusername@localhost:5432]:
>>> ps = prepare('select 1')
>>> ps.first()
1
>>> c = ps()
>>> c.read()
[(1,)]
>>> ps.close()
>>> import sys
>>> sys.exit(0)
postgresql.bin.pg_dotconf¶
pg_dotconf is used to modify a PostgreSQL cluster’s configuration file. It provides a means to apply settings specified from the command line and from a file referenced using the -f option.
Warning
include directives in configuration files are completely ignored. If modification of an included file is desired, the command must be applied to that specific file.
pg_dotconf Usage¶
Usage: postgresql.bin.pg_dotconf [--stdout] [-f filepath] postgresql.conf ([param=val]|[param])*
- Options:
--version
- show program's version number and exit
-h, --help
- show this help message and exit
-f SETTINGS, --file=SETTINGS
- A file of settings to apply to the given “postgresql.conf”
--stdout
- Redirect the product to standard output instead of writing back to the “postgresql.conf” file
Examples¶
Modifying a simple configuration file:
$ echo "setting = value" >pg.conf
# change 'setting'
$ python3 -m postgresql.bin.pg_dotconf pg.conf setting=newvalue
$ cat pg.conf
setting = 'newvalue'
# new settings are appended to the file
$ python3 -m postgresql.bin.pg_dotconf pg.conf another_setting=value
$ cat pg.conf
setting = 'newvalue'
another_setting = 'value'
# comment a setting
$ python3 -m postgresql.bin.pg_dotconf pg.conf another_setting
$ cat pg.conf
setting = 'newvalue'
#another_setting = 'value'
When a setting is given on the command line, it must be seen as one argument to the command, so it’s very important to avoid invocations like:
$ python3 -m postgresql.bin.pg_dotconf pg.conf setting = value
ERROR: invalid setting, '=' after 'setting'
HINT: Settings must take the form 'setting=value' or 'setting_name_to_comment'. Settings must also be received as a single argument.
Reference¶
postgresql
¶
py-postgresql is a Python package for using PostgreSQL. This includes low-level protocol tools, a driver(PG-API and DB-API 2.0), and cluster management tools.
See <http://postgresql.org> for more information about PostgreSQL and <http://python.org> for information about Python.
-
postgresql.
version
= '1.3.0'¶ The version string of py-postgresql.
-
postgresql.
version_info
= (1, 3, 0)¶ The version triple of py-postgresql: (major, minor, patch).
-
postgresql.
open
(iri=None, prompt_title=None, **kw)[source]¶ Create a postgresql.api.Connection to the server referenced by the given iri:
>>> import postgresql
# General Format:
>>> db = postgresql.open('pq://user:password@host:port/database')
# Connect to 'postgres' at localhost.
>>> db = postgresql.open('localhost/postgres')
Connection keywords can also be used with open. See the narratives for more information.
The prompt_title keyword is ignored. open will never prompt for the password unless it is explicitly instructed to do so.
(Note: “pq” is the name of the protocol used to communicate with PostgreSQL)
postgresql.api
¶
Application Programmer Interfaces for PostgreSQL.
postgresql.api
is a collection of Python APIs for the PostgreSQL DBMS. It
is designed to take full advantage of PostgreSQL’s features to provide the
Python programmer with substantial convenience.
This module is used to define “PG-API”. It creates a set of ABCs that makes up the basic interfaces used to work with a PostgreSQL server.
-
class
postgresql.api.
Message
[source]¶ Bases:
postgresql.python.element.Element
A message emitted by PostgreSQL. A message being a NOTICE, WARNING, INFO, etc.
-
code
¶ The SQL state code of the message.
-
details
¶ The additional details given with the message. Common keys should be the following:
- ‘severity’
- ‘context’
- ‘detail’
- ‘hint’
- ‘file’
- ‘line’
- ‘function’
- ‘position’
- ‘internal_position’
- ‘internal_query’
-
isconsistent
(other) → bool[source]¶ Whether the fields of the other Message object are consistent with the fields of self.
This must return the result of the comparison of code, source, message, and details.
This method is provided as the alternative to overriding equality; often, pointer equality is the desirable means for comparison, but equality of the fields is also necessary.
-
message
¶ The primary message string.
-
source
¶ Where the message originated from. Normally, ‘SERVER’, but sometimes ‘CLIENT’.
-
-
class
postgresql.api.
Statement
[source]¶ Bases:
postgresql.python.element.Element
Instances of Statement are returned by the prepare method of Database instances.
A Statement is an Iterable as well as Callable.
The Iterable interface is supported for queries that take no arguments at all. It allows the syntax:
>>> for x in db.prepare('select * FROM table'):
...     pass
-
clone
() → postgresql.api.Statement[source]¶ Create a new statement object using the same factors as self.
When used for refreshing plans, the new clone should replace references to the original.
-
column_names
¶ The attribute names of the columns produced by the statement.
A sequence of str objects stating the column name:
['column1', 'column2', 'emp_name']
-
column_types
¶ The Python types of the columns produced by the statement.
A sequence of type objects:
[<class 'int'>, <class 'str'>]
-
parameter_types
¶ The Python types expected of parameters given to the statement.
A sequence of type objects:
[<class 'int'>, <class 'str'>]
-
pg_column_types
¶ The type Oids of the columns produced by the statement.
A sequence of int objects stating the PostgreSQL type Oid:
[27, 28]
-
pg_parameter_types
¶ The type Oids of the parameters required by the statement.
A sequence of int objects stating the PostgreSQL type Oid:
[27, 28]
-
sql_column_types
¶ The type of the columns produced by the statement.
A sequence of str objects stating the SQL type name:
['INTEGER', 'VARCHAR', 'INTERVAL']
-
sql_parameter_types
¶ The type of the parameters required by the statement.
A sequence of str objects stating the SQL type name:
['INTEGER', 'VARCHAR', 'INTERVAL']
-
statement_id
¶ The statement’s identifier.
-
string
¶ The SQL string of the prepared statement.
None if not available. This can happen in cases where a statement is prepared on the server and a reference to the statement is sent to the client which subsequently uses the statement via the Database’s statement constructor.
-
-
class
postgresql.api.
Cursor
[source]¶ Bases:
postgresql.api.Result
A Cursor object is an interface to a sequence of tuples (rows); a result set. Cursors publish a file-like interface for reading tuples from a cursor declared on the database.
Cursor objects are created by invoking the Statement.declare method or by opening a cursor using an identifier via the Database.cursor_from_id method.
-
direction
¶ The default direction argument for read().
When True reads are FORWARD. When False reads are BACKWARD.
Cursor operation option.
-
read
(quantity=None, direction=None) → ['Row'][source]¶ Read, fetch, the specified number of rows and return them in a list. If quantity is None, all records will be fetched.
direction can be used to override the default configured direction.
This alters the cursor’s position.
Read does not directly correlate to FETCH. If zero is given as the quantity, an empty sequence must be returned.
-
seek
(offset, whence='ABSOLUTE')[source]¶ Set the cursor’s position to the given offset with respect to the whence parameter and the configured direction.
Whence values:
0 or "ABSOLUTE"
- Absolute.
1 or "RELATIVE"
- Relative.
2 or "FROM_END"
- Absolute from end.
3 or "FORWARD"
- Relative forward.
4 or "BACKWARD"
- Relative backward.
Direction affects whence. If direction is BACKWARD, ABSOLUTE positioning will effectively be FROM_END, RELATIVE’s position will be negated, and FROM_END will effectively be ABSOLUTE.
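A hedged sketch of combining read() and seek() on a declared cursor; it assumes the cursor is used within the enclosing transaction and the query is only illustrative:
ps = db.prepare("SELECT i FROM generate_series(1, 100) AS g(i)")
with db.xact():
    c = ps.declare()
    first_ten = c.read(10)   # advance ten rows forward
    c.seek(0)                # ABSOLUTE: reposition to the start
    again = c.read(10)       # the same ten rows, read again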
-
-
class
postgresql.api.
Connector
(user: str = None, password: str = None, database: str = None, settings: (<class 'dict'>, [(<class 'str'>, <class 'str'>)]) = None, category: postgresql.api.Category = None)[source]¶ Bases:
postgresql.python.element.Element
A connector is an object providing the necessary information to establish a connection. This includes credentials, database settings, and many times addressing information.
-
class
postgresql.api.
Category
[source]¶ Bases:
postgresql.python.element.Element
A category is an object that initializes the subject connection for a specific purpose.
Arguably, a runtime class for use with connections.
-
class
postgresql.api.
Database
[source]¶ Bases:
postgresql.python.element.Element
The interface to an individual database. Connection objects inherit from this.
-
backend_id
¶ The backend’s process identifier.
-
client_address
¶ The client address that the server sees. This is obtainable by querying the pg_catalog.pg_stat_activity relation. None if unavailable.
-
client_port
¶ The client port that the server sees. This is obtainable by querying the pg_catalog.pg_stat_activity relation. None if unavailable.
-
cursor_from_id
(cursor_id) → postgresql.api.Cursor[source]¶ Create a Cursor object from the given cursor_id that was already declared on the server.
Cursor objects created this way must not be closed when the object is garbage collected. Rather, the user must explicitly close it for the server resources to be released. This is in contrast to Cursor objects that are created by invoking a Statement or a SRF StoredProcedure.
-
do
(source) → None[source]¶ Execute a DO statement using the given language and source. Always returns None.
Likely to be a function of Connection.execute.
-
execute
() → None[source]¶ Execute an arbitrary block of SQL. Always returns None and raises an exception on error.
-
iternotifies
(timeout=None) → collections.abc.Iterator[source]¶ Return an iterator to the notifications received by the connection. The iterator must produce triples in the form (channel, payload, pid).
If timeout is not None, None must be emitted at the specified timeout interval. If the timeout is zero, all the pending notifications must be yielded by the iterator and then StopIteration must be raised.
If the connection is closed for any reason, the iterator must silently stop by raising StopIteration. Further error control is then the responsibility of the user.
-
listen
(*channels) → None[source]¶ Start listening to the given channels.
Equivalent to issuing “LISTEN <x>” for x in channels.
-
listening_channels
() → ['channel name', Ellipsis][source]¶ Return an iterator to all the channels currently being listened to.
-
notify
(*channels, **channel_and_payload) → int[source]¶ NOTIFY the channels with the given payload.
Equivalent to issuing “NOTIFY <channel>” or “NOTIFY <channel>, <payload>” for each item in channels and channel_and_payload. All NOTIFYs issued must occur in the same transaction.
The items in channels can either be a string or a tuple. If a string, no payload is given, but if an item is a builtins.tuple, the second item will be given as the payload. channels offers a means to issue NOTIFYs in guaranteed order.
The items in channel_and_payload are all payloaded NOTIFYs where the keys are the channels and the values are the payloads. Order is undefined.
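A hedged sketch of a LISTEN/NOTIFY round trip on a single connection; when the notification actually becomes pending depends on server-side processing:
db.listen('jobs')
db.notify(jobs='payload text')            # keyword form attaches a payload
for note in db.iternotifies(timeout=0):   # zero timeout: drain pending, then stop
    channel, payload, pid = note
    print(channel, payload, pid)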
-
prepare
(sql: str) → postgresql.api.Statement[source]¶ Create a new Statement instance bound to the connection using the given SQL.
>>> s = db.prepare("SELECT 1")
>>> c = s()
>>> c.next()
(1,)
-
proc
(procedure_id) → postgresql.api.StoredProcedure[source]¶ Create a StoredProcedure instance using the given identifier.
The proc_id given can be either an Oid, or a regprocedure that identifies the stored procedure to create the interface for.
>>> p = db.proc('version()')
>>> p()
'PostgreSQL 8.3.0'
>>> qstr = "select oid from pg_proc where proname = 'generate_series'"
>>> db.prepare(qstr).first()
1069
>>> generate_series = db.proc(1069)
>>> list(generate_series(1,5))
[1, 2, 3, 4, 5]
-
query
(sql: str, *args) → postgresql.api.Execution[source]¶ Prepare and execute the statement, sql, with the given arguments. Equivalent to db.prepare(sql)(*args).
-
reset
() → None[source]¶ Reset the connection to its original state.
Issues a RESET ALL to the database. If the database supports removing temporary tables created in the session, then remove them. Reapply initial configuration settings such as path.
The purpose behind this method is to provide a soft-reconnect method that re-initializes the connection into its original state. One obvious use of this would be in a connection pool where the connection is being recycled.
-
settings
¶ A Settings instance bound to the Database.
-
statement_from_id
(statement_id) → postgresql.api.Statement[source]¶ Create a Statement object that was already prepared on the server. The distinction between this and a regular query is that it must be explicitly closed if it is no longer desired, and it is instantiated using the statement identifier as opposed to the SQL statement itself.
-
unlisten
(*channels) → None[source]¶ Stop listening to the given channels.
Equivalent to issuing “UNLISTEN <x>” for x in channels.
-
version_info
¶ A version tuple of the database software, similar to Python’s sys.version_info.
>>> db.version_info
(8, 1, 3, '', 0)
-
xact
¶ Create a Transaction object using the given keyword arguments as its configuration.
-
-
class
postgresql.api.
Connection
[source]¶ Bases:
postgresql.api.Database
The interface to a connection to a PostgreSQL database. This is a Database interface with the additional connection management tools that are particular to using a remote database.
-
clone
() → postgresql.api.Connection[source]¶ Create another connection using the same factors as self. The returned object should be open and ready for use.
-
closed
¶ True if the Connection is closed, False if the Connection is open.
>>> db.closed
True
-
-
class
postgresql.api.
Transaction
[source]¶ Bases:
postgresql.python.element.Element
A Transaction is an element that represents a transaction in the session. Once created, it’s ready to be started, and subsequently committed or rolled back.
Read-only transaction:
>>> with db.xact(mode = 'read only'):
...     ...
Read committed isolation:
>>> with db.xact(isolation = 'READ COMMITTED'):
...     ...
Savepoints are created if inside a transaction block:
>>> with db.xact():
...     with db.xact():
...         ...
-
abort
() → None¶ Abort the transaction.
If the transaction is a savepoint, ROLLBACK TO the savepoint identifier.
If the transaction is a transaction block, issue an ABORT.
If the transaction has already been aborted, do nothing.
-
begin
() → None¶ Start the transaction.
If the database is in a transaction block, the transaction should be configured as a savepoint. If any transaction block configuration was applied to the transaction, raise a postgresql.exceptions.OperationError.
If the database is not in a transaction block, start one using the configuration where:
self.isolation specifies the ISOLATION LEVEL. Normally, READ COMMITTED, SERIALIZABLE, or READ UNCOMMITTED.
self.mode specifies the mode of the transaction. Normally, READ ONLY or READ WRITE.
If the transaction is already open, do nothing.
If the transaction has been committed or aborted, raise a postgresql.exceptions.OperationError.
-
commit
() → None[source]¶ Commit the transaction.
If the transaction is a block, issue a COMMIT statement.
If the transaction was started inside a transaction block, it should be identified as a savepoint, and the savepoint should be released.
If the transaction has already been committed, do nothing.
-
isolation
¶ The isolation level of the transaction block:
START TRANSACTION <isolation> [MODE];
The isolation property is a string and will be directly interpolated into the START TRANSACTION statement.
-
mode
¶ The mode of the transaction block:
START TRANSACTION [ISOLATION] <mode>;
The mode property is a string and will be directly interpolated into the START TRANSACTION statement.
-
rollback
() → None[source]¶ Abort the transaction.
If the transaction is a savepoint, ROLLBACK TO the savepoint identifier.
If the transaction is a transaction block, issue an ABORT.
If the transaction has already been aborted, do nothing.
-
start
() → None[source]¶ Start the transaction.
If the database is in a transaction block, the transaction should be configured as a savepoint. If any transaction block configuration was applied to the transaction, raise a postgresql.exceptions.OperationError.
If the database is not in a transaction block, start one using the configuration where:
self.isolation specifies the ISOLATION LEVEL. Normally, READ COMMITTED, SERIALIZABLE, or READ UNCOMMITTED.
self.mode specifies the mode of the transaction. Normally, READ ONLY or READ WRITE.
If the transaction is already open, do nothing.
If the transaction has been committed or aborted, raise a postgresql.exceptions.OperationError.
-
-
class
postgresql.api.
Settings
[source]¶ Bases:
postgresql.python.element.Element
A mapping interface to the session’s settings. This provides a direct interface to SHOW or SET commands. Identifiers and values need not be quoted specially as the implementation must do that work for the user.
-
get
(key, default=None)[source]¶ Get the setting with the corresponding key. If the setting does not exist, return the default.
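A hedged sketch of the mapping behavior; search_path is only an example setting:
db.settings['search_path'] = 'public'          # maps onto SET
current = db.settings.get('search_path', '')   # maps onto SHOW, with a default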
-
-
class
postgresql.api.
StoredProcedure
[source]¶ Bases:
postgresql.python.element.Element
A function stored on the database.
-
class
postgresql.api.
Driver
[source]¶ Bases:
postgresql.python.element.Element
The Driver element provides the Connector and other information pertaining to the implementation of the driver. Information about what the driver supports is available in instances.
-
class
postgresql.api.
Installation
[source]¶ Bases:
postgresql.python.element.Element
Interface to a PostgreSQL installation. Instances would provide various information about an installation of PostgreSQL accessible by the Python
-
ssl
¶ Whether the installation supports SSL.
-
type
¶ The “type” of PostgreSQL. Normally, the first component of the string returned by pg_config.
-
version
¶ A version string consistent with what SELECT version() would output.
-
version_info
¶ A tuple specifying the version in a form similar to Python’s sys.version_info. (8, 3, 3, ‘final’, 0)
See postgresql.versionstring.
-
-
class
postgresql.api.
Cluster
[source]¶ Bases:
postgresql.python.element.Element
Interface to a PostgreSQL cluster–a data directory. An implementation of this provides a means to control a server.
-
data_directory
¶ The path to the data directory of the cluster.
-
init
(initdb=None, user=None, password=None, encoding=None, locale=None, collate=None, ctype=None, monetary=None, numeric=None, time=None, text_search_config=None, xlogdir=None)[source]¶ Create the cluster at the data_directory associated with the Cluster instance.
-
installation
¶ The installation used by the cluster.
-
settings
¶ A Settings interface to the postgresql.conf file associated with the cluster.
-
postgresql.sys
¶
py-postgresql system functions and data.
Data¶
libpath
- The local file system paths that contain query libraries.
Overridable Functions¶
- excformat
- Information that makes up an exception’s displayed “body”. Effectively, the implementation of postgresql.exception.Error.__str__
- msghook
- Display a message.
-
postgresql.sys.
default_msghook
(msg, format_message=<function format_element>)[source]¶ Built-in message hook. DON’T TOUCH!
-
postgresql.sys.
errformat
(*args, **kw)[source]¶ Raised Database Error formatter pointing to default_excformat.
Override if you like. All postgresql.exceptions.Error’s are formatted using this function.
-
postgresql.sys.
msghook
(*args, **kw)[source]¶ Message hook pointing to default_msghook.
Override if you like. All untrapped messages raised by driver connections come here to be printed to stderr.
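A hedged sketch of overriding the hook, assuming that rebinding the module attribute is the intended override mechanism:
import logging
import postgresql.sys

def quiet_msghook(msg, *args, **kw):
    # Route untrapped database messages to the logging framework
    # instead of printing them to stderr.
    logging.getLogger('postgresql').warning(str(msg))

postgresql.sys.msghook = quiet_msghook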
postgresql.string
¶
String split and join operations for dealing with literals and identifiers.
Notably, the functions in this module are intended to be used for simple use-cases. It attempts to stay away from “real” parsing and simply provides functions for common needs, like the ability to identify unquoted portions of a query string so that logic or transformations can be applied to only the unquoted portions, to scan for statement terminators, or to safely interpolate identifiers.
All functions deal with strict quoting rules.
-
postgresql.string.
quote_ident
(text)[source]¶ Replace every instance of ‘”’ with ‘””’ and place ‘”’ on each end.
-
postgresql.string.
quote_ident_if_needed
(text)[source]¶ If needed, replace every instance of ‘”’ with ‘””’ and place ‘”’ on each end. Otherwise, just return the text.
-
postgresql.string.
quote_literal
(text)[source]¶ Escape the literal and wrap it in [single] quotations.
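A hedged sketch of using these helpers to interpolate an identifier and a literal into SQL text; the table name is hypothetical:
from postgresql.string import quote_ident, quote_literal

# Produces: SELECT "weird ""name""" FROM t WHERE x = 'O''Reilly'
sql = 'SELECT ' + quote_ident('weird "name"') + ' FROM t WHERE x = ' + quote_literal("O'Reilly")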
-
postgresql.string.
split
(text)[source]¶ Split the string into non-quoted and quoted portions. Zero and even numbered indexes are unquoted portions, while odd indexes are quoted portions.
Unquoted portions are regular strings, whereas quoted portions are pair-tuples specifying the quotation mechanism and the content thereof.
>>> list(split("select $$foobar$$"))
['select ', ('$$', 'foobar'), '']
If the split ends on a quoted section, it means the string’s quote was not terminated. Subsequently, there will be an even number of objects in the list.
Quotation errors are detected, but never raised. Rather it’s up to the user to identify the best course of action for the given split.
-
postgresql.string.
split_ident
(text, sep=', ', quote='"', maxsplit=-1)[source]¶ Split a series of identifiers using the specified separator.
-
postgresql.string.
split_qname
(text, maxsplit=-1)[source]¶ Call to .split_ident() with a ‘.’ sep parameter.
-
postgresql.string.
split_sql
(sql, sep=';')[source]¶ Given SQL, safely split using the given separator. Notably, this yields fully split text. This should be used instead of split_sql_str() when quoted sections still need to be isolated.
>>> list(split_sql('select $$1$$ AS "foo;"; select 2;'))
[['select ', ('$$', '1'), ' AS ', ('"', 'foo;'), ''], (' select 2',), ['']]
-
postgresql.string.
split_sql_str
(sql, sep=';')[source]¶ Identical to split_sql but yields unsplit text.
>>> list(split_sql_str('select $$1$$ AS "foo;"; select 2;'))
['select $$1$$ AS "foo;"', ' select 2', '']
postgresql.exceptions
¶
PostgreSQL exceptions and warnings with associated state codes.
The primary entry points of this module are the ErrorLookup function and the WarningLookup function. Given an SQL state code, they give back the most appropriate Error or Warning subclass.
- For more information on error codes see:
- http://www.postgresql.org/docs/current/static/errcodes-appendix.html
This module is executable via -m: python -m postgresql.exceptions. It provides a convenient way to look up the exception object mapped to by the given error code:
$ python -m postgresql.exceptions XX000
postgresql.exceptions.InternalError [XX000]
If the exact error code is not found, it will try to find the error class’s exception (the first two characters of the error code make up the class identity):
$ python -m postgresql.exceptions XX400
postgresql.exceptions.InternalError [XX000]
If that fails, it will return postgresql.exceptions.Error
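A hedged sketch of the programmatic lookup; per the mapping below, 23505 corresponds to UniqueError:
from postgresql.exceptions import ErrorLookup

cls = ErrorLookup('23505')   # unique_violation
print(cls)                   # <class 'postgresql.exceptions.UniqueError'>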
-
exception
postgresql.exceptions.
ActiveTransactionError
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
AdminShutdownError
(message, code=None, details={}, source=None, creator=None)[source]¶ Bases:
postgresql.exceptions.OIError
,postgresql.exceptions.Disconnection
-
exception
postgresql.exceptions.
AmbiguityError
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
AmbiguousAliasError
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
AmbiguousColumnError
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
AmbiguousFunctionError
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
AmbiguousParameterError
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
ArrayElementError
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
AssignmentError
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
AuthenticationMethodError
(message, code=None, details={}, source=None, creator=None)[source]¶ Bases:
postgresql.exceptions.DriverError
,postgresql.exceptions.Disconnection
Server requested an authentication method that is not supported by the driver.
-
exception
postgresql.exceptions.
AuthenticationSpecificationError
(message, code=None, details={}, source=None, creator=None)[source]¶ Bases:
postgresql.exceptions.Error
,postgresql.exceptions.Disconnection
-
exception
postgresql.exceptions.
BadAccessModeForBranchError
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
BadCopyError
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
BadIsolationForBranchError
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
BinaryRepresentationError
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
BranchAlreadyActiveError
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
CFError
(message, code=None, details={}, source=None, creator=None)[source]¶ Bases:
postgresql.exceptions.Error
Configuration File Error.
-
exception
postgresql.exceptions.
CardinalityError
(message, code=None, details={}, source=None, creator=None)[source]¶ Bases:
postgresql.exceptions.Error
Wrong number of rows returned.
-
exception
postgresql.exceptions.
CaseNotFoundError
(message, code=None, details={}, source=None, creator=None)[source]¶ Bases:
postgresql.exceptions.Error
-
exception
postgresql.exceptions.
CastCharacterValueError
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
CatalogNameError
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
CheckError
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
ClientCannotConnectError
(message, code=None, details={}, source=None, creator=None)[source]¶ Bases:
postgresql.exceptions.ConnectionError
Client was unable to establish a connection to the server.
-
exception
postgresql.exceptions.
CoercionError
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
ColumnDefinitionError
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
ColumnError
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
ColumnReferenceError
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
CompositeError
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
ConnectTimeoutError
(message, code=None, details={}, source=None, creator=None)[source]¶ Bases:
postgresql.exceptions.DriverError
,postgresql.exceptions.Disconnection
Client was unable to establish a connection in the given time.
-
exception
postgresql.exceptions.
ConnectionDoesNotExistError
(message, code=None, details={}, source=None, creator=None)[source]¶ Bases:
postgresql.exceptions.ConnectionError
The connection is closed or was never connected.
-
exception
postgresql.exceptions.
ConnectionError
(message, code=None, details={}, source=None, creator=None)[source]¶ Bases:
postgresql.exceptions.Error
,postgresql.exceptions.Disconnection
-
exception
postgresql.exceptions.
ConnectionFailureError
(message, code=None, details={}, source=None, creator=None)[source]¶ Bases:
postgresql.exceptions.ConnectionError
Raised when a connection is dropped.
-
exception
postgresql.exceptions.
ConnectionRejectionError
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
ContainingSQLNotPermittedError
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
CrashShutdownError
(message, code=None, details={}, source=None, creator=None)[source]¶ Bases:
postgresql.exceptions.OIError
,postgresql.exceptions.Disconnection
-
exception
postgresql.exceptions.
CursorDefinitionError
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
CursorNameError
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
CursorStateError
(message, code=None, details={}, source=None, creator=None)[source]¶ Bases:
postgresql.exceptions.Error
-
exception
postgresql.exceptions.
DPDSEError
(message, code=None, details={}, source=None, creator=None)[source]¶ Bases:
postgresql.exceptions.Error
Dependent Privilege Descriptors Still Exist.
-
exception
postgresql.exceptions.
DPDSEObjectError
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
DataCorruptedError
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
DataError
(message, code=None, details={}, source=None, creator=None)[source]¶ Bases:
postgresql.exceptions.Error
-
exception
postgresql.exceptions.
DataModificationProhibitedError
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
DatabaseDefinitionError
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
DateTimeFieldOverflowError
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
DateTimeFormatError
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
DeadlockError
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
DefinitionError
(message, code=None, details={}, source=None, creator=None)[source]¶
-
class
postgresql.exceptions.
DeprecationWarning
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
Disconnection
[source]¶ Bases:
postgresql.exceptions.Exception
Exception identifying errors that result in disconnection.
-
exception
postgresql.exceptions.
DiskFullError
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
DriverError
(message, code=None, details={}, source=None, creator=None)[source]¶ Bases:
postgresql.exceptions.Error
Errors originating in the driver’s implementation.
-
class
postgresql.exceptions.
DriverWarning
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
DuplicateAliasError
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
DuplicateColumnError
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
DuplicateCursorError
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
DuplicateDatabaseError
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
DuplicateError
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
DuplicateFileError
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
DuplicateFunctionError
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
DuplicateObjectError
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
DuplicatePreparedStatementError
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
DuplicateSchemaError
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
DuplicateTableError
(message, code=None, details={}, source=None, creator=None)[source]¶
-
class
postgresql.exceptions.
DynamicResultSetsReturnedWarning
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
EREError
(message, code=None, details={}, source=None, creator=None)[source]¶ Bases:
postgresql.exceptions.Error
External Routine Exception.
-
exception
postgresql.exceptions.
ERIEError
(message, code=None, details={}, source=None, creator=None)[source]¶ Bases:
postgresql.exceptions.Error
External Routine Invocation Exception.
-
exception
postgresql.exceptions.
EncodingError
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
Error
(message, code=None, details={}, source=None, creator=None)[source]¶ Bases:
postgresql.message.Message
,postgresql.exceptions.Exception
A PostgreSQL Error.
-
exception
postgresql.exceptions.
EscapeCharacterConflictError
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
EscapeCharacterError
(message, code=None, details={}, source=None, creator=None)[source]¶ Bases:
postgresql.exceptions.DataError
Invalid escape character.
-
exception
postgresql.exceptions.
EscapeOctetError
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
EscapeSequenceError
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
Exception
[source]¶ Bases:
Exception
Base PostgreSQL exception class.
-
exception
postgresql.exceptions.
FeatureError
(message, code=None, details={}, source=None, creator=None)[source]¶ Bases:
postgresql.exceptions.Error
Unsupported feature.
-
exception
postgresql.exceptions.
FloatingPointError
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
ForeignKeyCreationError
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
ForeignKeyError
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
FunctionDefinitionError
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
FunctionExecutedNoReturnStatementError
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
GrantorError
(message, code=None, details={}, source=None, creator=None)[source]¶ Bases:
postgresql.exceptions.Error
-
exception
postgresql.exceptions.
GrantorOperationError
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
GroupingError
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
ICVError
(message, code=None, details={}, source=None, creator=None)[source]¶ Bases:
postgresql.exceptions.Error
Integrity Constraint Violation.
-
exception
postgresql.exceptions.
IRError
(message, code=None, details={}, source=None, creator=None)[source]¶ Bases:
postgresql.exceptions.Error
Insufficient Resource Error.
-
exception
postgresql.exceptions.
ITSError
(message, code=None, details={}, source=None, creator=None)[source]¶ Bases:
postgresql.exceptions.TransactionError
Invalid Transaction State.
-
class
postgresql.exceptions.
IgnoredClientParameterWarning
(message, code=None, details={}, source=None, creator=None)[source]¶ Bases:
postgresql.exceptions.DriverWarning
Warn the user of a valid, but ignored parameter.
-
exception
postgresql.exceptions.
ImmutableRuntimeParameterError
(message, code=None, details={}, source=None, creator=None)[source]¶
-
class
postgresql.exceptions.
ImplicitZeroBitPaddingWarning
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
InFailedTransactionError
(message, code=None, details={}, source=None, creator=None)[source]¶ Bases:
postgresql.exceptions.ITSError
Occurs when an action occurs in a failed transaction.
-
exception
postgresql.exceptions.
InconsistentCursorIsolationError
(message, code=None, details={}, source=None, creator=None)[source]¶ Bases:
postgresql.exceptions.ITSError
The held cursor requires the same isolation.
-
exception
postgresql.exceptions.
IndeterminateTypeError
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
IndexCorruptedError
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
IndicatorOverflowError
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
IndicatorParameterValueError
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
InsecurityError
(message, code=None, details={}, source=None, creator=None)[source]¶ Bases:
postgresql.exceptions.DriverError
,postgresql.exceptions.Disconnection
Error signifying a secure channel to a server cannot be established.
-
exception
postgresql.exceptions.
InsufficientPrivilegeError
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
IntegrityConstraintViolationError
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
InternalError
(message, code=None, details={}, source=None, creator=None)[source]¶ Bases:
postgresql.exceptions.Error
-
exception
postgresql.exceptions.
IntervalFieldOverflowError
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
InvalidSQLState
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
InvalidSavepointSpecificationError
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
LimitValueError
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
LoadError
[source]¶ Bases:
postgresql.exceptions.Exception
Failed to load a library.
-
exception
postgresql.exceptions.
LocatorError
(message, code=None, details={}, source=None, creator=None)[source]¶ Bases:
postgresql.exceptions.Error
-
exception
postgresql.exceptions.
LocatorSpecificationError
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
LockFileExistsError
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
LogArgumentError
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
MemoryError
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
ModifyingSQLDataNotPermittedError
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
NameError
(message, code=None, details={}, source=None, creator=None)[source]¶ Bases:
postgresql.exceptions.Error
-
exception
postgresql.exceptions.
NameTooLongError
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
NoActiveTransactionError
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
NoActiveTransactionForBranchError
(message, code=None, details={}, source=None, creator=None)[source]¶
-
class
postgresql.exceptions.
NoDataWarning
(message, code=None, details={}, source=None, creator=None)[source]¶
-
class
postgresql.exceptions.
NoMoreSetsReturned
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
NonstandardUseOfEscapeCharacterError
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
NotNullError
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
NotXMLError
(message, code=None, details={}, source=None, creator=None)[source]¶
-
class
postgresql.exceptions.
NullValueEliminatedInSetFunctionWarning
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
NullValueNoIndicatorParameter
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
NullValueNotAllowed
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
NullValueNotAllowedError
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
NumericRangeError
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
OIError
(message, code=None, details={}, source=None, creator=None)[source]¶ Bases:
postgresql.exceptions.Error
Operator Intervention.
-
exception
postgresql.exceptions.
ONIPSError
(message, code=None, details={}, source=None, creator=None)[source]¶ Bases:
postgresql.exceptions.Error
Object Not In Prerequisite State.
-
exception
postgresql.exceptions.
ObjectDefinitionError
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
ObjectInUseError
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
OffsetValueError
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
OperationError
(message, code=None, details={}, source=None, creator=None)[source]¶ Bases:
postgresql.exceptions.DriverError
An invalid operation on an interface element.
-
exception
postgresql.exceptions.
PLPGSQLError
(message, code=None, details={}, source=None, creator=None)[source]¶ Bases:
postgresql.exceptions.Error
Error raised by a PL/PgSQL procedural function.
-
exception
postgresql.exceptions.
PLPGSQLNoDataFoundError
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
PLPGSQLRaiseError
(message, code=None, details={}, source=None, creator=None)[source]¶ Bases:
postgresql.exceptions.PLPGSQLError
Error raised by a PL/PgSQL RAISE statement.
-
exception
postgresql.exceptions.
PLPGSQLTooManyRowsError
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
ParameterError
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
ParameterValueError
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
PowerFunctionArgumentError
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
PreparedStatementDefinitionError
(message, code=None, details={}, source=None, creator=None)[source]¶
-
class
postgresql.exceptions.
PrivilegeNotGrantedWarning
(message, code=None, details={}, source=None, creator=None)[source]¶
-
class
postgresql.exceptions.
PrivilegeNotRevokedWarning
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
ProhibitedSQLStatementError
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
ProtocolError
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
QueryCanceledError
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
ReadOnlyTransactionError
(message, code=None, details={}, source=None, creator=None)[source]¶ Bases:
postgresql.exceptions.ITSError
Occurs when an alteration occurs in a read-only transaction.
-
exception
postgresql.exceptions.
ReadingDataProhibitedError
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
ReadingSQLDataNotPermittedError
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
RecursionError
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
RegularExpressionError
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
ReservedNameError
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
RestrictError
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
RoleSpecificationError
(message, code=None, details={}, source=None, creator=None)[source]¶ Bases:
postgresql.exceptions.Error
-
exception
postgresql.exceptions.
SEARVError
(message, code=None, details={}, source=None, creator=None)[source]¶ Bases:
postgresql.exceptions.Error
Syntax Error or Access Rule Violation.
-
exception
postgresql.exceptions.
SEARVNameError
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
SIOError
(message, code=None, details={}, source=None, creator=None)[source]¶ Bases:
postgresql.exceptions.Error
System I/O.
-
exception
postgresql.exceptions.
SQLNotYetCompleteError
(message, code=None, details={}, source=None, creator=None)[source]¶ Bases:
postgresql.exceptions.Error
-
exception
postgresql.exceptions.
SREError
(message, code=None, details={}, source=None, creator=None)[source]¶ Bases:
postgresql.exceptions.Error
SQL Routine Exception.
-
exception
postgresql.exceptions.
SRFProtocolError
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
SavepointError
(message, code=None, details={}, source=None, creator=None)[source]¶ Bases:
postgresql.exceptions.TransactionError
Classification error designating errors that relate to savepoints.
-
exception
postgresql.exceptions.
SchemaAndDataStatementsError
(message, code=None, details={}, source=None, creator=None)[source]¶ Bases:
postgresql.exceptions.ITSError
Mixed schema and data statements not allowed.
-
exception
postgresql.exceptions.
SchemaDefinitionError
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
SchemaNameError
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
SerializationError
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
ServerNotReadyError
(message, code=None, details={}, source=None, creator=None)[source]¶ Bases:
postgresql.exceptions.OIError
,postgresql.exceptions.Disconnection
Thrown when a connection is established to a server that is still starting up.
-
exception
postgresql.exceptions.
SpecificTypeMismatch
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
StatementCompletionUnknownError
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
StatementNameError
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
StatementProhibitedError
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
StringDataLengthError
(message, code=None, details={}, source=None, creator=None)[source]¶
-
class
postgresql.exceptions.
StringDataRightTruncationWarning
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
StringRightTruncationError
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
SubstringError
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
SyntaxError
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
TRError
(message, code=None, details={}, source=None, creator=None)[source]¶ Bases:
postgresql.exceptions.TransactionError
Transaction Rollback.
-
exception
postgresql.exceptions.
TableDefinitionError
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
TextRepresentationError
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
TimeZoneDisplacementValueError
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
TooManyConnectionsError
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
TransactionError
(message, code=None, details={}, source=None, creator=None)[source]¶ Bases:
postgresql.exceptions.Error
-
exception
postgresql.exceptions.
TransactionInitiationError
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
TransactionResolutionUnknownError
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
TransactionTerminationError
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
TriggerProtocolError
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
TriggeredActionError
(message, code=None, details={}, source=None, creator=None)[source]¶ Bases:
postgresql.exceptions.Error
-
exception
postgresql.exceptions.
TriggeredDataChangeViolation
(message, code=None, details={}, source=None, creator=None)[source]¶ Bases:
postgresql.exceptions.Error
-
exception
postgresql.exceptions.
TrimError
(message, code=None, details={}, source=None, creator=None)[source]¶
-
class
postgresql.exceptions.
TypeConversionWarning
(message, code=None, details={}, source=None, creator=None)[source]¶ Bases:
postgresql.exceptions.DriverWarning
Report a potential issue with a conversion.
-
exception
postgresql.exceptions.
TypeError
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
TypeIOError
(message, code=None, details={}, source=None, creator=None)[source]¶ Bases:
postgresql.exceptions.DriverError
Driver failed to pack or unpack a tuple.
-
exception
postgresql.exceptions.
TypeMismatchError
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
UndefinedColumnError
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
UndefinedError
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
UndefinedFileError
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
UndefinedFunctionError
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
UndefinedObjectError
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
UndefinedParameterError
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
UndefinedTableError
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
UniqueError
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
UnterminatedCStringError
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
UntranslatableCharacterError
(message, code=None, details={}, source=None, creator=None)[source]¶
-
class
postgresql.exceptions.
Warning
(message, code=None, details={}, source=None, creator=None)[source]¶ Bases:
postgresql.message.Message
-
exception
postgresql.exceptions.
WidthBucketFunctionArgumentError
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
WindowError
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
WithCheckOptionError
(message, code=None, details={}, source=None, creator=None)[source]¶ Bases:
postgresql.exceptions.Error
-
exception
postgresql.exceptions.
WrongObjectTypeError
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
XMLCommentError
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
XMLContentError
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
XMLDocumentError
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
XMLProcessingInstructionError
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
ZeroDivisionError
(message, code=None, details={}, source=None, creator=None)[source]¶
-
exception
postgresql.exceptions.
ZeroLengthString
(message, code=None, details={}, source=None, creator=None)[source]¶
-
postgresql.exceptions.
map_errors_and_warnings
(objs, error_container={...}, warning_container={...})[source]¶ Construct the code-to-error and code-to-warning associations.
By default, error_container maps PostgreSQL SQLSTATE codes, along with the driver's internal '--'-prefixed codes, to the corresponding postgresql.exceptions.Error subclasses (for example, '23505' to UniqueError, '40P01' to DeadlockError, and '42601' to SyntaxError), and warning_container maps warning codes (SQLSTATE classes 01 and 02, plus driver codes) to postgresql.exceptions.Warning subclasses. See postgresql.exceptions for the complete set of classes.
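In practice, this mapping means a server error is raised as the specific exception class registered for its SQLSTATE, and the code remains available on the raised instance. A minimal sketch, assuming db is an open connection and emp is a table with a primary key (as in the sample code later in this document):
>>> from postgresql import exceptions
>>> db.execute("INSERT INTO emp VALUES ('John Doe', 150000)")
>>> try:
...  db.execute("INSERT INTO emp VALUES ('John Doe', 150000)")
... except exceptions.UniqueError as err:
...  print(err.code)
23505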
postgresql.temporal
¶
Temporary PostgreSQL cluster for the process.
-
class
postgresql.temporal.
Temporal
[source]¶ Bases:
object
Manages a temporary cluster for the duration of the process.
Instances of this class reference a distinct cluster. These clusters are transient; they will only exist until the process exits.
Usage:
>>> from postgresql.temporal import pg_tmp
>>> with pg_tmp:
...  ps = db.prepare('SELECT 1')
...  assert ps.first() == 1
Or pg_tmp can decorate a method or function.
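For instance, a minimal sketch of the decorator form; it assumes, as in the context manager usage above, that the temporary cluster's connection is made available as db inside the decorated callable:
>>> from postgresql.temporal import pg_tmp
>>> @pg_tmp
... def check():
...  return db.prepare('SELECT 1').first()
>>> check()
1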
-
postgresql.temporal.
pg_tmp
= <postgresql.temporal.Temporal object>¶ The process’ temporary cluster or connection source.
postgresql.installation
¶
Collect and access PostgreSQL installation information.
-
class
postgresql.installation.
Installation
(info: dict)[source]¶ Bases:
postgresql.api.Installation
Class providing a Python interface to PostgreSQL installation information.
-
info
= None¶ The pg_config information dictionary.
-
ssl
¶ Whether the installation was compiled with SSL support.
-
-
postgresql.installation.
default
(typ=<class 'postgresql.installation.Installation'>)[source]¶ Get the default Installation.
Uses default_pg_config() to identify the executable.
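A brief sketch of inspecting the default installation; it assumes a pg_config executable can be found (for example, via the PGINSTALLATION environment variable), and the True result shown is illustrative of an SSL-enabled build:
>>> from postgresql.installation import default
>>> pg_install = default()
>>> pg_install.ssl
True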
postgresql.cluster
¶
Create, control, and destroy PostgreSQL clusters.
postgresql.cluster provides a programmer’s interface to controlling a PostgreSQL cluster. It provides direct access to proper signalling interfaces.
-
class
postgresql.cluster.
Cluster
(installation, data_directory)[source]¶ Bases:
postgresql.api.Cluster
Interface to a PostgreSQL cluster.
Provides mechanisms to start, stop, restart, kill, drop, and initialize a cluster (data directory).
Cluster does not strive to be consistent with
pg_ctl
. This is a base class for managing a cluster, intended to be extended to accommodate a particular purpose. A usage sketch follows the method listing below.-
connect
(**kw)[source]¶ Create an established connection from the connector.
Cluster must be running.
-
connector
(**kw)[source]¶ Create a postgresql.driver connector based on the given keywords and listen_addresses and port configuration in settings.
-
daemon_path
¶ Path to the executable used to start the cluster.
-
hba_file
¶ The path to the HBA file of the cluster.
-
init
(password=None, timeout=None, **kw)[source]¶ Create the cluster at the given data_directory using the provided keyword parameters as options to the command.
command_option_map provides the mapping of keyword arguments to command options.
-
pid
¶ The postmaster's process ID; when this instance started the daemon, the pid of the managed subprocess object is used.
-
ready_for_connections
()[source]¶ Whether the daemon is running and is no longer in startup mode.
This only works for clusters configured for TCP/IP connections.
-
restart
(logfile=None, settings=None, timeout=10)[source]¶ Restart the cluster gracefully.
This provides a higher level interface to stopping then starting the cluster. It will perform the wait operations and block until the restart is complete.
If waiting is not desired, .start() and .stop() should be used directly.
-
running
()[source]¶ Whether or not the postmaster is running.
This does not mean the cluster is accepting connections.
-
settings
¶ A Settings interface to the
postgresql.conf
file associated with the cluster.
-
wait_until_started
(timeout=10, delay=0.05)[source]¶ After the start method is used, this can be run in order to block until the cluster is ready for use.
This method loops until ready_for_connections returns True in order to make sure that the cluster is actually up.
-
wait_until_stopped
(timeout=10, delay=0.05)[source]¶ After the stop method is used, this can be run in order to block until the cluster is shut down.
Additionally, catching ClusterTimeoutError exceptions would be a starting point for making decisions about whether or not to issue a kill to the daemon.
-
-
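A minimal lifecycle sketch tying these methods together; the data directory, initdb keywords, settings, and connection keywords below are illustrative assumptions rather than required values:
>>> from postgresql.installation import default
>>> from postgresql.cluster import Cluster
>>> c = Cluster(default(), '/tmp/demo_cluster')
>>> c.init(user = 'demo', password = 'secret')
>>> c.settings.update({'listen_addresses': 'localhost', 'port': '5433'})
>>> c.start()
>>> c.wait_until_started()
>>> db = c.connect(user = 'demo', password = 'secret', database = 'postgres')
>>> db.prepare('SELECT 1').first()
1
>>> db.close()
>>> c.stop()
>>> c.wait_until_stopped()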
exception
postgresql.cluster.
ClusterError
(message, code=None, details={}, source=None, creator=None)[source]¶ Bases:
postgresql.exceptions.Error
General cluster error.
-
exception
postgresql.cluster.
ClusterInitializationError
(message, code=None, details={}, source=None, creator=None)[source]¶ Bases:
postgresql.cluster.ClusterError
General cluster initialization failure.
-
exception
postgresql.cluster.
ClusterNotRunningError
(message, code=None, details={}, source=None, creator=None)[source]¶ Bases:
postgresql.cluster.ClusterError
Cluster is not running.
-
exception
postgresql.cluster.
ClusterStartupError
(message, code=None, details={}, source=None, creator=None)[source]¶ Bases:
postgresql.cluster.ClusterError
Cluster startup failed.
-
exception
postgresql.cluster.
ClusterTimeoutError
(message, code=None, details={}, source=None, creator=None)[source]¶ Bases:
postgresql.cluster.ClusterError
Cluster operation timed out.
-
class
postgresql.cluster.
ClusterWarning
(message, code=None, details={}, source=None, creator=None)[source]¶ Bases:
postgresql.exceptions.Warning
Warning issued by cluster operations.
-
exception
postgresql.cluster.
InitDBError
(message, code=None, details={}, source=None, creator=None)[source]¶ Bases:
postgresql.cluster.ClusterInitializationError
A non-zero result was returned by the initdb command.
postgresql.copyman
¶
Manage complex COPY operations; one-to-many COPY streaming.
Primarily this module houses the CopyManager class, and the transfer function for a high-level interface to using the CopyManager.
-
class
postgresql.copyman.
CallReceiver
(callable)[source]¶ Bases:
postgresql.copyman.Receiver
Call the given object with a list of COPY lines.
-
exception
postgresql.copyman.
CopyFail
(manager, reason=None, receiver_faults=None, producer_fault=None)[source]¶ Bases:
Exception
Exception thrown by the CopyManager when the COPY operation failed.
The ‘manager’ attribute is the CopyManager that raised the CopyFail.
The ‘reason’ attribute is a string indicating why it failed.
The ‘receiver_faults’ attribute is a mapping of receivers to exceptions that were raised on exit.
The ‘producer_fault’ attribute specifies whether the producer raised an exception on exit.
-
class
postgresql.copyman.
CopyManager
(producer, *receivers)[source]¶ Bases:
postgresql.python.element.Element
,collections.abc.Iterator
A class for managing COPY operations.
Connects the producer to the receivers.
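A sketch of driving the manager directly rather than through transfer(); it assumes src and dst are open connections and that the statement-based fittings, StatementProducer and StatementReceiver, are used to wrap the COPY statements:
>>> from postgresql import copyman
>>> producer = copyman.StatementProducer(src.prepare("COPY src_table TO STDOUT"))
>>> receiver = copyman.StatementReceiver(dst.prepare("COPY dst_table FROM STDIN"))
>>> with copyman.CopyManager(producer, receiver) as copy:
...  for progress in copy:
...   pass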
-
class
postgresql.copyman.
Fitting
[source]¶ Bases:
postgresql.python.element.Element
-
protocol
¶ The COPY data format produced or consumed.
-
-
class
postgresql.copyman.
IteratorProducer
(iterator)[source]¶ Bases:
postgresql.copyman.Producer
-
class
postgresql.copyman.
NullProducer
[source]¶ Bases:
postgresql.copyman.Producer
Produces no copy data.
-
class
postgresql.copyman.
NullReceiver
[source]¶ Bases:
postgresql.copyman.Receiver
-
class
postgresql.copyman.
Producer
[source]¶ Bases:
postgresql.copyman.Fitting
,collections.abc.Iterator
-
exception
postgresql.copyman.
ProducerFault
(manager)[source]¶ Bases:
postgresql.copyman.Fault
Exception raised when the Producer caused an exception.
Normally, Producer faults are fatal.
-
class
postgresql.copyman.
ProtocolProducer
(recv_into, buffer_size=10240)[source]¶ Bases:
postgresql.copyman.Producer
Producer using a PQv3 data stream.
Normally, this class needs to be subclassed as it assumes that the given recv_into function will write COPY messages.
-
class
postgresql.copyman.
ProtocolReceiver
(send)[source]¶ Bases:
postgresql.copyman.Receiver
-
class
postgresql.copyman.
Receiver
[source]¶ Bases:
postgresql.copyman.Fitting
-
exception
postgresql.copyman.
ReceiverFault
(manager, faults)[source]¶ Bases:
postgresql.copyman.Fault
Exception raised when Receivers cause an exception.
Faults should be trapped if recovery from an exception is possible, or if the failed receiver is optional to the success of the operation.
The ‘manager’ attribute is the CopyManager that raised the fault.
The ‘faults’ attribute is a dictionary mapping the receiver to the exception instance raised.
-
class
postgresql.copyman.
WireState
(condition=<method-wrapper '__ne__' of int object>)[source]¶ Bases:
object
Manages the state of the wire.
This class manages three possible positions:
- Between wire messages
- Inside message header
- Inside message (with complete header)
The wire state will become unusable when the configured condition is True.
-
postgresql.copyman.
default_buffer_size
= 10240¶ 10KB buffer for COPY messages by default.
-
postgresql.copyman.
transfer
(producer, *receivers)[source]¶ Perform a COPY operation using the given statements:
>>> from postgresql import copyman
>>> copyman.transfer(src.prepare("COPY table TO STDOUT"), dst.prepare("COPY table FROM STDIN"))
-
postgresql.copyman.
ulong_pack
()¶ S.pack(v1, v2, …) -> bytes
ulong_pack is a bound struct.Struct.pack method; it returns a bytes object containing the given values packed according to the Struct's format string. See help(struct) for more on format strings.
postgresql.alock
¶
Tools for Advisory Locks
-
class
postgresql.alock.
ALock
(database, *identifiers)[source]¶ Bases:
postgresql.python.element.Element
Advisory Lock class for managing the acquisition and release of a sequence of PostgreSQL advisory locks.
ALock()’s are fairly consistent with threading.RLock()’s. They can be acquired multiple times, and they must be released the same number of times for the lock to actually be released.
A notable difference is that ALocks manage a sequence of lock identifiers, meaning a given ALock() may represent multiple advisory locks. A usage sketch follows the ExclusiveLock entry below.
-
acquire
(blocking=True, len=<built-in function len>)[source]¶ Acquire the locks using the configured identifiers.
-
locked
()[source]¶ Whether the locks have been acquired. This method is sensitive to the connection’s state. If the connection is closed, it will return False.
-
mode
¶ The mode of the lock class.
-
-
class
postgresql.alock.
ExclusiveLock
(database, *identifiers)[source]¶ Bases:
postgresql.alock.ALock
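A minimal sketch of exclusive-mode usage via the with-statement, assuming db is an established connection and (0, 0) is an application-chosen (int4, int4) lock identifier:
>>> from postgresql.alock import ExclusiveLock
>>> with ExclusiveLock(db, (0, 0)):
...  pass  # work performed while the advisory lock is held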
Changes¶
Changes in v1.2¶
1.2.2 released on 2020-09-22¶
- Correct broken Connection.proc.
- Correct IPv6 IRI host oversight.
- Document an ambiguity case of DB-API 2.0 connection creation and the workaround (unix vs. host/port).
- (Pending, active in 1.3) DB-API 2.0 connect() failures caused an undesired exception chain; ClientCannotConnect is now raised.
- Minor maintenance on tests and support modules.
1.2.0 released on 2016-06-23¶
- PostgreSQL 9.3 compatibility fixes (Elvis)
- Python 3.5 compatibility fixes (Elvis)
- Add support for JSONB type (Elvis)
Changes in v1.1¶
1.1.0¶
- Remove two-phase commit interfaces per the deprecation in v1.0. Proper two-phase commit use requires a lock manager, and the implementation did nothing to accommodate one.
- Add support for unpacking anonymous records (Elvis)
- Support PostgreSQL 9.2 (Elvis)
- Python 3.3 Support (Elvis)
- Add column execution method. (jwp)
- Add one-shot statement interface. Connection.query.* (jwp)
- Modify the inet/cidr support to rely on the ipaddress module introduced in Python 3.3 (from Google’s ipaddr project). The previous implementation relied on the simple str() representation supported by the socket module; unfortunately, MS Windows’ socket library does not appear to support the necessary functionality, or Python’s socket module does not expose it. ipaddress fixes the problem.
Note
The ipaddress module is now required for local inet and cidr support. While it is of “preliminary” status, the ipaddr project has been around for some time and is well supported. ipaddress appears to be the safest way forward for native network types.
Changes in v1.0¶
1.0.4 in development¶
- Alter how changes are represented in documentation to simplify merging.
1.0.3 released on 2011-09-24¶
- Use raise x from y to generalize exceptions. (Elvis Pranskevichus)
- Alter postgresql.string.quote_ident to always quote. (Elvis Pranskevichus)
- Add postgresql.string.quote_ident_if_necessary (Modification of Elvis Pranskevichus’ patch)
- Many postgresql.string bug fixes (Elvis Pranskevichus)
- Correct ResourceWarnings improving Python 3.2 support. (jwp)
- Add test command to setup.py (Elvis Pranskevichus)
1.0.2 released on 2010-09-18¶
- Add support for DOMAINs in registered composites. (Elvis Pranskevichus)
- Properly raise StopIteration in Cursor.__next__. (Elvis Pranskevichus)
- Add Cluster Management documentation.
- Release savepoints after rolling them back.
- Fix Startup() usage for Python 3.2.
- Emit deprecation warning when ‘gid’ is given to xact().
- Compensate for Python3.2’s ElementTree API changes.
1.0.1 released on 2010-04-24¶
- Fix unpacking of array NULLs. (Elvis Pranskevichus)
- Fix .first()’s handling of counts and commands. Bad logic caused zero-counts to return the command tag.
- Don’t interrupt and close a temporal connection if it’s not open.
- Use the Driver’s typio attribute for TypeIO overrides. (Elvis Pranskevichus)
1.0 released on 2010-03-27¶
- DEPRECATION: Removed 2PC support documentation.
- DEPRECATION: Removed pg_python and pg_dotconf ‘scripts’. They are still accessible by python3 -m postgresql.bin.pg_*
- Add support for binary hstore.
- Add support for user service files.
- Implement a Copy manager for direct connection-to-connection COPY operations.
- Added db.do() method for DO-statement support (convenience method).
- Set the default client_min_messages level to WARNING. NOTICEs are often not desired by programmers, and py-postgresql’s high verbosity further irritates that case.
- Added postgresql.project module to provide project information. Project name, author, version, etc.
- Increased default recvsize and chunksize for improved performance.
- ‘D’ messages are special cased as builtins.tuples instead of protocol.element3.Tuple
- Alter Statement.chunks() to return chunks of builtins.tuple. Being an interface intended for speed, types.Row() impedes its performance.
- Fix handling of infinity values with timestamptz, timestamp, and date. [Bug reported by Axel Rau.]
- Correct representation of PostgreSQL ARRAYs by properly recording lowerbounds and upperbounds. Internally, sub-ARRAYs have their own element lists.
- Implement a NotificationManager for managing the NOTIFYs received by a connection. The class can manage NOTIFYs from multiple connections, whereas the db.wait() method is tailored for single targets.
- Implement an ALock class for managing advisory locks using the threading.Lock APIs. [Feedback from Valentine Gogichashvili]
- Implement reference symbols. Allow libraries to define symbols that are used to create queries that inherit the original symbol’s type and execution method.
db.prepare(db.prepare(...).first())
- Fix handling of unix domain sockets by pg.open and driver.connect. [Reported by twitter.com/rintavarustus]
- Fix typo/dropped parts of a raise LoadError in .lib. [Reported by Vlad Pranskevichus]
- Fix db.tracer and pg_python’s --pq-trace=
- Fix count return from .first() method. Failed to provide an empty tuple for the rformats of the bind statement. [Reported by dou dou]
Sample Code¶
Using postgresql.driver:
>>> import postgresql
>>> db = postgresql.open("pq://user:password@host/name_of_database")
>>> db.execute("CREATE TABLE emp (emp_name text PRIMARY KEY, emp_salary numeric)")
>>>
>>> # Create the statements.
>>> make_emp = db.prepare("INSERT INTO emp VALUES ($1, $2)")
>>> raise_emp = db.prepare("UPDATE emp SET emp_salary = emp_salary + $2 WHERE emp_name = $1")
>>> get_emp_with_salary_lt = db.prepare("SELECT emp_name FROM emp WHERE emp_salary < $1")
>>>
>>> # Create some employees, but do it in a transaction--all or nothing.
>>> with db.xact():
... make_emp("John Doe", "150,000")
... make_emp("Jane Doe", "150,000")
... make_emp("Andrew Doe", "55,000")
... make_emp("Susan Doe", "60,000")
>>>
>>> # Give some raises
>>> with db.xact():
...  for row in get_emp_with_salary_lt("125000"):
...   print(row["emp_name"])
...   raise_emp(row["emp_name"], "10000")
Of course, if DB-API 2.0 is desired, the module is located at postgresql.driver.dbapi20. DB-API extends PG-API, so the features illustrated above are available on DB-API connections.
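A brief sketch of the equivalent DB-API 2.0 usage; the connection keywords mirror the client parameters used above and are illustrative:
>>> import postgresql.driver.dbapi20 as dbapi20
>>> con = dbapi20.connect(user = 'user', password = 'password', host = 'host', port = 5432, database = 'name_of_database')
>>> cur = con.cursor()
>>> cur.execute("SELECT emp_name, emp_salary FROM emp")
>>> rows = cur.fetchall()
>>> con.close()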
See Driver for more information.