Components

Component Base

class powerpool.lib.Component[source]

Abstract base class documenting the component architecture expectations. Each major part of powerpool inherits from this class.

_configure(config)[source]

Applies defaults and checks requirements of component configuration

_incr(counter, amount=1)[source]
_lookup(key)[source]
defaults = {}
gl_methods = []
key = None
name
one_min_stats = []
one_sec_stats = []
start()[source]

Called when the application is starting.

status

Should return a JSON-convertible data structure to be shown in the web interface.

stop()[source]

Called when the application is trying to exit. Should not block.

update_config(updated_config)[source]

A call performed when the configuration file gets reloaded at runtime. self.raw_config will have been pre-populated by the manager before the call is made.

Since configuration values of certain components can’t be reloaded at runtime, it’s good practice to log a warning when a change is detected but can’t be applied. Not currently used, but set aside for sunnier days.
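The attributes and methods above suggest a configuration pattern where each subclass declares defaults that get overlaid with user-supplied values. The following is a minimal sketch of that pattern; only the documented names are taken from the source, and the method body is an assumption, not PowerPool's actual implementation:

```python
class Component:
    # Per-subclass attributes documented above; bodies below are
    # assumptions sketching the likely behavior.
    defaults = {}        # default config values for this component
    key = None
    gl_methods = []      # greenlet methods spawned by the manager
    one_min_stats = []
    one_sec_stats = []

    def _configure(self, config):
        # Apply defaults, then overlay the user-supplied values
        self.config = dict(self.defaults)
        self.config.update(config)


class ExampleComponent(Component):
    # Hypothetical subclass for illustration only
    defaults = {'enabled': False, 'port': 3333}


comp = ExampleComponent()
comp._configure({'port': 4444})
```

After `_configure`, `comp.config` holds the defaults with `port` overridden.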

PowerPool (manager)

class powerpool.main.PowerPool(config)[source]

This is a singleton class that manages starting/stopping of the server, along with the rotation schedules of all statistical counters. It takes the raw config and distributes it to each module, as well as loading dynamic modules.

It also provides the logging facilities by acting as the central logging registry. Each module can “register” a logger with the main object, which attaches it to the configured handlers.
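A minimal sketch of such a central registry, assuming one shared handler stands in for the configured handler list (the wiring shown here is an assumption, not PowerPool's code):

```python
import logging


class LogRegistry:
    # Hypothetical stand-in for the manager's logging registry
    def __init__(self, default_level='INFO'):
        self.handler = logging.StreamHandler()  # stands in for configured handlers
        self.default_level = default_level

    def register_logger(self, name):
        # Attach the shared handler and apply the default level
        logger = logging.getLogger(name)
        logger.setLevel(self.default_level)
        logger.addHandler(self.handler)
        return logger


registry = LogRegistry()
log = registry.register_logger('stratum_server')
```

This mirrors the `default_component_log_level` and `loggers` entries visible in the defaults below.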

_tick_stats()[source]

A greenlet that handles rotation of statistics

defaults = {'loggers': [{'type': 'StreamHandler', 'level': 'NOTSET'}], 'server_number': 0, 'datagram': {'host': '127.0.0.1', 'enabled': False, 'port': 6855}, 'default_component_log_level': 'INFO', 'term_timeout': 10, 'procname': 'powerpool', 'algorithms': {'scrypt': {'hashes_per_share': 65536, 'module': 'ltc_scrypt.getPoWHash'}, 'x11': {'hashes_per_share': 4294967296, 'module': 'drk_hash.getPoWHash'}, 'scryptn': {'hashes_per_share': 65536, 'module': 'vtc_scrypt.getPoWHash'}, 'sha256': {'hashes_per_share': 4294967296, 'module': 'cryptokit.sha256d'}, 'lyra2re': {'hashes_per_share': 33554432, 'module': 'lyra2re_hash.getPoWHash'}, 'blake256': {'hashes_per_share': 65536, 'module': 'blake_hash.getPoWHash'}}, 'extranonce_size': 4, 'events': {'host': '127.0.0.1', 'enabled': False, 'port': 8125}, 'extranonce_serv_size': 4}
dump_objgraph()[source]

Dump garbage collection information on SIGUSR1 to aid debugging memory leaks
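Wiring a dump like this to SIGUSR1 might look like the following sketch; the handler body is a plain `gc`-based stand-in for the objgraph dump (the actual output PowerPool produces may differ):

```python
import gc
import signal


def dump_objgraph(signum=None, frame=None):
    # Count live objects by type as a rough memory-leak aid;
    # a gc-based stand-in for an objgraph-style dump.
    gc.collect()
    counts = {}
    for obj in gc.get_objects():
        name = type(obj).__name__
        counts[name] = counts.get(name, 0) + 1
    return counts


# Register so `kill -USR1 <pid>` triggers the dump (Unix, main thread)
signal.signal(signal.SIGUSR1, dump_objgraph)
```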

exit(signal=None)[source]

Handle an exit request

classmethod from_raw_config(raw_config, args)[source]
gl_methods = ['_tick_stats']
handle(data, address)[source]
log_event(event)[source]

Sends an event to statsd

manager = None
register_logger(name)[source]
register_stat_counters(comp, min_counters, sec_counters=None)[source]

Creates and adds the stat counters to internal tracking dictionaries. These dictionaries are iterated to perform stat rotation, as well as accessed to perform stat logging
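The rotation these dictionaries drive can be sketched roughly as follows; the counter structure and `tick` name are assumptions about what `_tick_stats` iterates over:

```python
import collections


class MinuteCounter:
    # Hypothetical one-minute stat counter rotated by the manager
    def __init__(self):
        self.total = 0              # lifetime total
        self.minute = 0             # count for the current minute
        self.history = collections.deque(maxlen=60)  # last hour

    def incr(self, amount=1):
        self.minute += amount
        self.total += amount

    def tick(self):
        # Called once per minute by the rotation greenlet: archive
        # the current minute and reset it
        self.history.append(self.minute)
        self.minute = 0


c = MinuteCounter()
c.incr(5)
c.tick()     # rotate: 5 moves into history
c.incr(2)
```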

start()[source]
status

For display in the http monitor

Stratum Server

Handles spawning one or many stratum servers (each binding to a single port), as well as the corresponding agent servers. It holds data structures that allow lookup of all StratumClient objects.

class powerpool.stratum_server.StratumServer(config)[source]

A single port binding of our stratum server.

_spawn = None
add_client(client)[source]
defaults = {'minimum_manual_diff': 64, 'reporter': None, 'server_seed': 0, 'vardiff': {'spm_target': 20, 'tiers': [8, 16, 32, 64, 96, 128, 192, 256, 512], 'interval': 30, 'enabled': False}, 'agent': {'port_diff': 1111, 'enabled': False, 'timeout': 120, 'accepted_types': ['temp', 'status', 'hashrate', 'thresholds']}, 'address': '0.0.0.0', 'start_difficulty': 128, 'port': 3333, 'aliases': {}, 'idle_worker_disconnect_threshold': 3600, 'donate_key': 'donate', 'valid_address_versions': [], 'jobmanager': None, 'algo': 2345987234589723495872345L, 'idle_worker_threshold': 300, 'push_job_interval': 30}
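The `vardiff` defaults above define a shares-per-minute target and a set of difficulty tiers. A plausible tier-selection sketch, using those values (the actual selection logic is an assumption):

```python
import bisect

# Values copied from the vardiff defaults above
TIERS = [8, 16, 32, 64, 96, 128, 192, 256, 512]
SPM_TARGET = 20


def pick_tier(current_diff, observed_spm):
    # Estimate the difficulty that would bring this client's share
    # rate back to the target, then snap to the nearest tier above it
    ideal = current_diff * (observed_spm / float(SPM_TARGET))
    idx = bisect.bisect_left(TIERS, ideal)
    if idx >= len(TIERS):
        return TIERS[-1]
    return TIERS[idx]
```

For example, a client at difficulty 128 submitting 40 shares/minute (twice the target) would be moved up to the 256 tier.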
handle(sock, address)[source]

A new connection appears on the server, so setup a new StratumClient object to manage it.

new_job(event)[source]

Gets called whenever there’s a new job generated by our jobmanager.

one_min_stats = ['stratum_connects', 'stratum_disconnects', 'agent_connects', 'agent_disconnects', 'reject_low_share_n1', 'reject_dup_share_n1', 'reject_stale_share_n1', 'acc_share_n1', 'reject_low_share_count', 'reject_dup_share_count', 'reject_stale_share_count', 'acc_share_count', 'unk_err', 'not_authed_err', 'not_subbed_err']
remove_client(client)[source]

Manages removing the StratumClient from the lookup tables

set_user(client)[source]

Adds the client to (or creates) the appropriate worker and address trackers

start(*args, **kwargs)[source]
status

For display in the http monitor

stop(*args, **kwargs)[source]
class powerpool.stratum_server.StratumClient(sock, address, logger, manager, server, reporter, algo, config)[source]

Object representation of a single stratum connection to the server.

DUP_SHARE = 1
DUP_SHARE_ERR = 22
LOW_DIFF_ERR = 23
LOW_DIFF_SHARE = 2
STALE_SHARE = 3
STALE_SHARE_ERR = 21
VALID_SHARE = 0
_incr(*args)[source]
_push(job, flush=False, block=True)[source]

Abbreviated push update that occurs when pushing new block notifications. Micro-optimized to try and cut stale share rates as much as possible.

details

Displayed on the single client view in the http status monitor

error_counter = {24: 'not_authed_err', 25: 'not_subbed_err', 20: 'unk_err'}
errors = {20: 'Other/Unknown', 21: 'Job not found (=stale)', 22: 'Duplicate share', 23: 'Low difficulty share', 24: 'Unauthorized worker', 25: 'Not subscribed'}
last_share_submit_delta
push_difficulty()[source]

Pushes the current difficulty to the client. Currently this only happens upon initial connect, but it would be used for vardiff

push_job(flush=False, timeout=False)[source]

Pushes the latest job down to the client. Flush indicates whether the client should discard its previous jobs. A flush occurs when a new block is found, since work on the old block is invalid.

read(*args, **kwargs)[source]
recalc_vardiff()[source]
send_error(num=20, id_val=1)[source]

Utility for transmitting an error to the client

send_success(id_val=1)[source]

Utility for transmitting success to the client
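Combined with the `errors` table above, these two utilities presumably build standard stratum JSON-RPC replies. A sketch of what they might transmit (the exact framing is an assumption):

```python
import json

# Error table copied from the `errors` attribute above
ERRORS = {20: 'Other/Unknown', 21: 'Job not found (=stale)',
          22: 'Duplicate share', 23: 'Low difficulty share',
          24: 'Unauthorized worker', 25: 'Not subscribed'}


def send_error(num=20, id_val=1):
    # Stratum error replies carry a (code, message, traceback) triple
    return json.dumps({'id': id_val, 'result': None,
                       'error': (num, ERRORS[num], None)}) + '\n'


def send_success(id_val=1):
    return json.dumps({'id': id_val, 'result': True,
                       'error': None}) + '\n'
```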

share_type_strings = {0: 'acc', 1: 'dup', 2: 'low', 3: 'stale'}
submit_job(data, t)[source]

Handles receiving a work submission and checking that it is valid, whether it meets network difficulty, etc. Sends a reply to the stratum client.

summary

Displayed on the all client view in the http status monitor

Jobmanager

This module generates mining jobs and sends them to workers. It must provide current jobs for the stratum server to be able to push. The reference implementation monitors an RPC daemon server.

class powerpool.jobmanagers.monitor_aux_network.MonitorAuxNetwork(config)[source]
_check_new_jobs(*args, **kwargs)[source]
defaults = {'signal': None, 'enabled': False, 'send': True, 'currency': 2345987234589723495872345L, 'rpc_ping_int': 2, 'algo': 2345987234589723495872345L, 'flush': False, 'coinservs': 2345987234589723495872345L, 'work_interval': 1}
found_block(address, worker, header, coinbase_raw, job, start)[source]
gl_methods = ['_monitor_nodes', '_check_new_jobs']
one_min_stats = ['work_restarts', 'new_jobs']
start()[source]
status
class powerpool.jobmanagers.monitor_network.MonitorNetwork(config)[source]
_check_new_jobs(*args, **kwargs)[source]
_poll_height(*args, **kwargs)[source]
config = {'diff1': 1766820104831717178943502833727831496196810259731196417549125097682370560L, 'pool_address': '', 'pow_block_hash': False, 'max_blockheight': None, 'block_poll': 0.2, 'hashes_per_share': 65535, 'currency': 2345987234589723495872345L, 'rpc_ping_int': 2, 'algo': 2345987234589723495872345L, 'poll': None, 'payout_drk_mn': True, 'coinservs': 2345987234589723495872345L, 'signal': None, 'merged': (), 'job_refresh': 15}
defaults = {'diff1': 1766820104831717178943502833727831496196810259731196417549125097682370560L, 'pool_address': '', 'pow_block_hash': False, 'max_blockheight': None, 'block_poll': 0.2, 'hashes_per_share': 65535, 'currency': 2345987234589723495872345L, 'rpc_ping_int': 2, 'algo': 2345987234589723495872345L, 'poll': None, 'payout_drk_mn': True, 'coinservs': 2345987234589723495872345L, 'signal': None, 'merged': (), 'job_refresh': 15}
found_block(raw_coinbase, address, worker, hash_hex, header, job, start)[source]

Submit a valid block (hopefully!) to the RPC servers

generate_job(push=False, flush=False, new_block=False, network='main')[source]

Creates a new job for miners to work on. Push triggers an event that sends new work but doesn’t force a restart. If flush is true, a job restart will be triggered.
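The push/flush semantics can be sketched as a simple broadcaster: `push` emits a new-job event to listeners, while `flush` additionally marks the job so clients drop in-progress work (e.g. on a new block). The event plumbing below is a stand-in, not PowerPool's actual event system:

```python
class JobBroadcaster:
    # Hypothetical stand-in for the jobmanager's new-job event
    def __init__(self):
        self.listeners = []   # callables invoked with each new job

    def generate_job(self, push=False, flush=False):
        job = {'flush': flush}
        if push or flush:
            for listener in self.listeners:
                listener(job)
        return job


received = []
bc = JobBroadcaster()
bc.listeners.append(received.append)
bc.generate_job(push=True)    # new work, clients keep old jobs
bc.generate_job(flush=True)   # new block: clients restart work
```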

getblocktemplate(new_block=False, signal=False)[source]
new_merged_work(event)[source]
one_min_stats = ['work_restarts', 'new_jobs', 'work_pushes']
start()[source]
status

For display in the http monitor

Reporters

The reporter is responsible for transmitting shares, mining statistics, and new blocks to some external storage. The reference implementation is the CeleryReporter, which aggregates shares into batches and logs them in a way designed to interface with SimpleCoin. The reporter is also responsible for tracking share rates for vardiff. This makes sense if you want vardiff to be based on the shares per second of an entire address, instead of a single connection.
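Address-wide share-rate tracking might look like the following sketch, where shares from every connection of an address feed one trailing window (the data structure and 60-second window are assumptions):

```python
import collections


class ShareRateTracker:
    # Hypothetical per-address share-rate tracker for vardiff
    def __init__(self, window=60.0):
        self.window = window
        self.shares = collections.defaultdict(collections.deque)

    def log_share(self, address, diff, now):
        q = self.shares[address]
        q.append((now, diff))
        # Drop entries that have aged out of the window
        while q and now - q[0][0] > self.window:
            q.popleft()

    def spm(self, address, now):
        # Shares per minute over the trailing window
        q = self.shares[address]
        recent = sum(1 for t, _ in q if now - t <= self.window)
        return recent * (60.0 / self.window)


tracker = ShareRateTracker()
for t in (0, 10, 20, 90):
    tracker.log_share('addr1', diff=128, now=t)
```

At `now=90`, only the share logged at 90 is still inside the window, so the observed rate is 1 share per minute.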

class powerpool.reporters.base.Reporter[source]

An abstract base class to document the Reporter interface.

add_block(address, height, total_subsidy, fees, hex_bits, hash, merged, worker, algo)[source]

Called when a share is submitted with a hash that is valid for the network.

agent_send(address, worker, typ, data, time)[source]

Called when valid data is received from a PPAgent connection.

log_share(client, diff, typ, params, job=None, header_hash=None, header=None, start=None, **kwargs)[source]

Logs a share to external sources for payout calculation and statistics

class powerpool.reporters.redis_reporter.RedisReporter(config)[source]
_queue_add_block(address, height, total_subsidy, fees, hex_bits, hex_hash, currency, algo, merged=False, worker=None, **kwargs)[source]
_queue_agent_send(address, worker, typ, data, stamp)[source]
_queue_log_one_minute(address, worker, algo, stamp, typ, amount)[source]
_queue_log_share(address, shares, algo, currency, merged=False)[source]
agent_send(*args, **kwargs)[source]
defaults = {'pool_report_configs': {}, 'redis': {}, 'attrs': {}, 'chain': 1}
gl_methods = ['_queue_proc', '_report_one_min']
log_share(client, diff, typ, params, job=None, header_hash=None, header=None, **kwargs)[source]
one_sec_stats = ['queued']
status

Monitor

class powerpool.monitor.ServerMonitor(config)[source]

Provides a few useful json endpoints for viewing server health and performance.
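A plausible shape for such an endpoint is aggregating each component's `status` property (documented on Component above) into one JSON document; the payload shapes here are assumptions, not the actual endpoint output:

```python
import json


class FakeComponent:
    # Hypothetical component exposing a `status` property, mirroring
    # the Component interface documented above
    def __init__(self, key, status):
        self.key = key
        self._status = status

    @property
    def status(self):
        return self._status


def general_view(components):
    # Collect every component's status under its key
    return json.dumps({c.key: c.status for c in components})


payload = general_view([FakeComponent('stratum', {'clients': 3}),
                        FakeComponent('jobmanager', {'height': 1200})])
```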

client(comp_key, username)[source]
clients_0_5()[source]

Legacy client view emulating version 0.5 support

clients_comp(comp_key)[source]
comp(comp_key)[source]
comp_config(comp_key)[source]
counters()[source]
debug()[source]
defaults = {'DEBUG': False, 'JSONIFY_PRETTYPRINT_REGULAR': False, 'port': 3855, 'JSON_SORT_KEYS': False, 'address': '127.0.0.1'}
general()[source]
general_0_5()[source]

Legacy 0.5 emulating view

handler_class

alias of CustomWSGIHandler

start(*args, **kwargs)[source]
stop(*args, **kwargs)[source]