
Class: Puma::Server

Relationships & Source Files
Super Chains via Extension / Inclusion / Inheritance
Instance Chain:
self, Request, Const
Inherits: Object
Defined in: lib/puma/server.rb

Overview

The HTTP Server itself. Serves out a single Rack app.

This class is used by the Single and Cluster classes to generate one or more Server instances capable of handling requests. Each Puma process will contain one Server instance.

The Server instance pulls requests from the socket and adds them to a Reactor, where they are eventually passed to a ThreadPool.

Each Server will have one reactor and one thread pool.
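
A minimal usage sketch (hedged; the Rack app, host, and port below are illustrative, and real deployments normally let the puma CLI build the Server via Single or Cluster):

require 'puma'
require 'puma/server'

# Any object responding to #call works as the Rack app.
app = ->(env) { [200, { 'content-type' => 'text/plain' }, ['Hello from Puma']] }

# Options not supplied fall back to Puma's defaults.
server = Puma::Server.new(app, nil, { min_threads: 0, max_threads: 4 })
server.add_tcp_listener '127.0.0.1', 9292   # delegates to the Binder
server.run.join                             # run returns the server thread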

Constant Summary

Const - Included

BANNED_HEADER_KEY, CGI_VER, CHUNKED, CHUNK_SIZE, CLOSE, CLOSE_CHUNKED, CODE_NAME, COLON, CONNECTION_CLOSE, CONNECTION_KEEP_ALIVE, CONTENT_LENGTH, CONTENT_LENGTH2, CONTENT_LENGTH_S, CONTINUE, DQUOTE, EARLY_HINTS, ERROR_RESPONSE, GATEWAY_INTERFACE, HALT_COMMAND, HEAD, HIJACK, HIJACK_IO, HIJACK_P, HTTP, HTTPS, HTTPS_KEY, HTTP_10_200, HTTP_11, HTTP_11_100, HTTP_11_200, HTTP_CONNECTION, HTTP_EXPECT, HTTP_HEADER_DELIMITER, HTTP_HOST, HTTP_VERSION, HTTP_X_FORWARDED_FOR, HTTP_X_FORWARDED_PROTO, HTTP_X_FORWARDED_SCHEME, HTTP_X_FORWARDED_SSL, IANA_HTTP_METHODS, ILLEGAL_HEADER_KEY_REGEX, ILLEGAL_HEADER_VALUE_REGEX, KEEP_ALIVE, LINE_END, LOCALHOST, LOCALHOST_IPV4, LOCALHOST_IPV6, MAX_BODY, MAX_HEADER, NEWLINE, PATH_INFO, PORT_443, PORT_80, PROXY_PROTOCOL_V1_REGEX, PUMA_CONFIG, PUMA_PEERCERT, PUMA_SERVER_STRING, PUMA_SOCKET, PUMA_TMP_BASE, PUMA_VERSION, QUERY_STRING, RACK_AFTER_REPLY, RACK_INPUT, RACK_RESPONSE_FINISHED, RACK_URL_SCHEME, REMOTE_ADDR, REQUEST_METHOD, REQUEST_PATH, REQUEST_URI, RESTART_COMMAND, SERVER_NAME, SERVER_PORT, SERVER_PROTOCOL, SERVER_SOFTWARE, STOP_COMMAND, SUPPORTED_HTTP_METHODS, TRANSFER_ENCODING, TRANSFER_ENCODING2, TRANSFER_ENCODING_CHUNKED, UNMASKABLE_HEADERS, UNSPECIFIED_IPV4, UNSPECIFIED_IPV6, WRITE_TIMEOUT

Request - Included

BODY_LEN_MAX, CUSTOM_STAT, IO_BODY_MAX, IO_BUFFER_LEN_MAX, SOCKET_WRITE_ERR_MSG

Class Attribute Summary

Class Method Summary

Instance Attribute Summary

Instance Method Summary

Request - Included

#default_server_port,
#handle_request

Takes the request contained in client, invokes the Rack application to construct the response and writes it back to client.io.

#prepare_response

Assembles the headers and prepares the body for actually sending the response via #fast_write_response.

#fast_write_response

Used to write headers and body.

#fast_write_str

Used to write ‘early hints’, ‘no body’ responses, ‘hijacked’ responses, and body segments (called by fast_write_response).

#fetch_status_code, #illegal_header_key?, #illegal_header_value?,
#normalize_env

Given a Hash env for the request read from client, add and fix up keys to comply with Rack’s env guidelines.

#req_env_post_parse

Fix up any headers with ',' in the name to use '_' instead.

#str_early_hints

Used in the lambda for env[ Const::EARLY_HINTS ].

#str_headers

Processes and writes headers to the IOBuffer.

Constructor Details

.new(app, events = nil, options = {}) ⇒ Server

Note:

Several instance variables exist so they are available for testing, and have default values set via fetch. Normally the values are set via Configuration#puma_default_options.

Note:

The #events parameter defaults to nil and is set to Events.new in code. Often #options needs to be passed, but #events does not; using a nil default allows calling code to not require events.rb.

Creates a server for the Rack app #app.

#log_writer is a LogWriter object used to log info and error messages.

#events is an Events object used to notify application status events.

#run returns a thread that you can join on to wait for the server to do its work.

[ GitHub ]

  
# File 'lib/puma/server.rb', line 71

def initialize(app, events = nil, options = {})
  @app = app
  @events = events || Events.new

  @check, @notify = nil
  @status = :stop

  @thread = nil
  @thread_pool = nil

  @options = if options.is_a?(UserFileDefaultOptions)
    options
  else
    UserFileDefaultOptions.new(options, Configuration::DEFAULTS)
  end

  @clustered                 = (@options.fetch :workers, 0) > 0
  @worker_write              = @options[:worker_write]
  @log_writer                = @options.fetch :log_writer, LogWriter.stdio
  @early_hints               = @options[:early_hints]
  @first_data_timeout        = @options[:first_data_timeout]
  @persistent_timeout        = @options[:persistent_timeout]
  @idle_timeout              = @options[:idle_timeout]
  @min_threads               = @options[:min_threads]
  @max_threads               = @options[:max_threads]
  @queue_requests            = @options[:queue_requests]
  @max_keep_alive            = @options[:max_keep_alive]
  @enable_keep_alives        = @options[:enable_keep_alives]
  @enable_keep_alives      &&= @queue_requests
  @io_selector_backend       = @options[:io_selector_backend]
  @http_content_length_limit = @options[:http_content_length_limit]

  # make this a hash, since we prefer `key?` over `include?`
  @supported_http_methods =
    if @options[:supported_http_methods] == :any
      :any
    else
      if (ary = @options[:supported_http_methods])
        ary
      else
        SUPPORTED_HTTP_METHODS
      end.sort.product([nil]).to_h.freeze
    end

  temp = !!(@options[:environment] =~ /\A(development|test)\z/)
  @leak_stack_on_error = @options[:environment] ? temp : true

  @binder = Binder.new(log_writer)

  ENV['RACK_ENV'] ||= "development"

  @mode = :http

  @precheck_closing = true

  @requests_count = 0

  @idle_timeout_reached = false
end
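
As a hedged illustration of the options handling above: a plain Hash passed as options is wrapped in UserFileDefaultOptions backed by Configuration::DEFAULTS, so unspecified keys fall back to Puma's defaults. The values in the comments are assumptions, not guaranteed output:

app = ->(env) { [204, {}, []] }

server = Puma::Server.new(app, nil, { environment: 'production', queue_requests: true })
server.options[:max_threads]   # => whatever Configuration::DEFAULTS provides
server.leak_stack_on_error     # => false, since 'production' is neither development nor test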

Class Attribute Details

.closed_socket_supported? ⇒ Boolean (readonly)

This method is for internal use only.

Version:

  • 5.0.0

[ GitHub ]

  
# File 'lib/puma/server.rb', line 149

def closed_socket_supported?
  Socket.const_defined?(:TCP_INFO) && Socket.const_defined?(:IPPROTO_TCP)
end

.current (readonly)

[ GitHub ]

  
# File 'lib/puma/server.rb', line 137

def current
  Thread.current.puma_server
end

.tcp_cork_supported? ⇒ Boolean (readonly)

This method is for internal use only.

Version:

  • 5.0.0

[ GitHub ]

  
# File 'lib/puma/server.rb', line 143

def tcp_cork_supported?
  Socket.const_defined?(:TCP_CORK) && Socket.const_defined?(:IPPROTO_TCP)
end

Instance Attribute Details

#app (rw)

[ GitHub ]

  
# File 'lib/puma/server.rb', line 50

attr_accessor :app

#auto_trim_time (readonly)

TODO:

the following may be deprecated in the future

[ GitHub ]

  
# File 'lib/puma/server.rb', line 46

attr_reader :auto_trim_time, :early_hints, :first_data_timeout,
  :leak_stack_on_error,
  :persistent_timeout, :reaping_time

#backlog (readonly)

[ GitHub ]

  
# File 'lib/puma/server.rb', line 215

def backlog
  @thread_pool&.backlog
end

#binder (rw)

[ GitHub ]

  
# File 'lib/puma/server.rb', line 51

attr_accessor :binder

#busy_threads (readonly)

[ GitHub ]

  
# File 'lib/puma/server.rb', line 237

def busy_threads
  @thread_pool&.busy_threads
end

#connected_ports (readonly)

[ GitHub ]

  
# File 'lib/puma/server.rb', line 706

def connected_ports
  @binder.connected_ports
end

#early_hints (readonly)

TODO:

the following may be deprecated in the future

[ GitHub ]

  
# File 'lib/puma/server.rb', line 46

attr_reader :auto_trim_time, :early_hints, :first_data_timeout,
  :leak_stack_on_error,
  :persistent_timeout, :reaping_time

#events (readonly)

[ GitHub ]

  
# File 'lib/puma/server.rb', line 41

attr_reader :events

#first_data_timeout (readonly)

TODO:

the following may be deprecated in the future

[ GitHub ]

  
# File 'lib/puma/server.rb', line 46

attr_reader :auto_trim_time, :early_hints, :first_data_timeout,
  :leak_stack_on_error,
  :persistent_timeout, :reaping_time

#leak_stack_on_error (readonly)

TODO:

the following may be deprecated in the future

[ GitHub ]

  
# File 'lib/puma/server.rb', line 46

attr_reader :auto_trim_time, :early_hints, :first_data_timeout,
  :leak_stack_on_error,
  :persistent_timeout, :reaping_time

#log_writer (readonly)

[ GitHub ]

  
# File 'lib/puma/server.rb', line 40

attr_reader :log_writer

#max_threads (readonly)

for #stats

[ GitHub ]

  
# File 'lib/puma/server.rb', line 42

attr_reader :min_threads, :max_threads  # for #stats

#min_threads (readonly)

for #stats

[ GitHub ]

  
# File 'lib/puma/server.rb', line 42

attr_reader :min_threads, :max_threads  # for #stats

#options (readonly)

[ GitHub ]

  
# File 'lib/puma/server.rb', line 38

attr_reader :options

#persistent_timeout (readonly)

TODO:

the following may be deprecated in the future

[ GitHub ]

  
# File 'lib/puma/server.rb', line 46

attr_reader :auto_trim_time, :early_hints, :first_data_timeout,
  :leak_stack_on_error,
  :persistent_timeout, :reaping_time

#pool_capacity (readonly)

This number represents the number of requests that the server is capable of taking right now.

For example, if the number is 5, there are 5 threads sitting idle ready to take a request. If one request comes in, the value becomes 4 until it finishes processing.

[ GitHub ]

  
# File 'lib/puma/server.rb', line 232

def pool_capacity
  @thread_pool&.pool_capacity
end
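
A hedged monitoring sketch (server is assumed to be a running Puma::Server; nil is returned before the thread pool exists):

Thread.new do
  loop do
    capacity = server.pool_capacity        # idle threads able to take a request
    warn "Puma pool capacity low: #{capacity}" if capacity && capacity < 2
    sleep 5
  end
end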

#reaping_time (readonly)

TODO:

the following may be deprecated in the future

[ GitHub ]

  
# File 'lib/puma/server.rb', line 46

attr_reader :auto_trim_time, :early_hints, :first_data_timeout,
  :leak_stack_on_error,
  :persistent_timeout, :reaping_time

#requests_count (readonly)

Version:

  • 5.0.0

[ GitHub ]

  
# File 'lib/puma/server.rb', line 43

attr_reader :requests_count             # @version 5.0.0

#running (readonly)

[ GitHub ]

  
# File 'lib/puma/server.rb', line 220

def running
  @thread_pool&.spawned
end

#shutting_down? ⇒ Boolean (readonly)

[ GitHub ]

  
# File 'lib/puma/server.rb', line 653

def shutting_down?
  @status == :stop || @status == :restart
end

#stats ⇒ Hash (readonly)

Returns a hash of stats about the running server for reporting purposes.

Returns:

  • (Hash)

    hash containing stat info from Server and ThreadPool

Version:

  • 5.0.0

[ GitHub ]

  
# File 'lib/puma/server.rb', line 674

def stats
  stats = @thread_pool&.stats || {}
  stats[:max_threads]    = @max_threads
  stats[:requests_count] = @requests_count
  stats[:reactor_max] = @reactor.reactor_max
  reset_max
  stats
end
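
A hedged illustration of the returned hash when called on a running server; the ThreadPool-provided keys and the values shown are assumptions and can vary between Puma versions:

server.stats
# => { backlog: 0, running: 5, pool_capacity: 5, busy_threads: 0,
#      max_threads: 5, requests_count: 42, reactor_max: 3 }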

#thread (readonly)

[ GitHub ]

  
# File 'lib/puma/server.rb', line 39

attr_reader :thread

Instance Method Details

#add_ssl_listener(host, port, ctx, optimize_for_latency = true, backlog = 1024)

[ GitHub ]

  
# File 'lib/puma/server.rb', line 696

def add_ssl_listener(host, port, ctx, optimize_for_latency = true,
                     backlog = 1024)
  @binder.add_ssl_listener host, port, ctx, optimize_for_latency, backlog
end

#add_tcp_listener(host, port, optimize_for_latency = true, backlog = 1024)

[ GitHub ]

  
# File 'lib/puma/server.rb', line 692

def add_tcp_listener(host, port, optimize_for_latency = true, backlog = 1024)
  @binder.add_tcp_listener host, port, optimize_for_latency, backlog
end

#add_unix_listener(path, umask = nil, mode = nil, backlog = 1024)

[ GitHub ]

  
# File 'lib/puma/server.rb', line 701

def add_unix_listener(path, umask = nil, mode = nil, backlog = 1024)
  @binder.add_unix_listener path, umask, mode, backlog
end
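
All three add_*_listener helpers delegate straight to the Binder. A hedged sketch (paths and ports are illustrative; the MiniSSL::Context setup is an assumption and is left commented out):

server.add_tcp_listener  '0.0.0.0', 9292
server.add_unix_listener '/tmp/puma.sock'

# An SSL listener additionally needs a configured Puma::MiniSSL::Context:
# ctx = Puma::MiniSSL::Context.new
# ctx.key  = 'server.key'
# ctx.cert = 'server.crt'
# server.add_ssl_listener '0.0.0.0', 9293, ctx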

#begin_restart(sync = false)

[ GitHub ]

  
# File 'lib/puma/server.rb', line 648

def begin_restart(sync=false)
  notify_safely(RESTART_COMMAND)
  @thread.join if @thread && sync
end

#client_error(e, client, requests = 1)

Handle various error types thrown by Client I/O operations.

[ GitHub ]

  
# File 'lib/puma/server.rb', line 542

def client_error(e, client, requests = 1)
  # Swallow, do not log
  return if [ConnectionError, EOFError].include?(e.class)

  case e
  when MiniSSL::SSLError
    lowlevel_error(e, client.env)
    @log_writer.ssl_error e, client.io
  when HttpParserError
    response_to_error(client, requests, e, 400)
    @log_writer.parse_error e, client
  when HttpParserError501
    response_to_error(client, requests, e, 501)
    @log_writer.parse_error e, client
  else
    response_to_error(client, requests, e, 500)
    @log_writer.unknown_error e, nil, "Read"
  end
end

#closed_socket?(socket)

See additional method definition at line 192.

[ GitHub ]

  
# File 'lib/puma/server.rb', line 209

def closed_socket?(socket)
  skt = socket.to_io
  return false unless skt.kind_of?(TCPSocket) && @precheck_closing

  begin
    tcp_info = skt.getsockopt(Socket::IPPROTO_TCP, Socket::TCP_INFO)
  rescue IOError, SystemCallError
    Puma::Util.purge_interrupt_queue
    @precheck_closing = false
    false
  else
    state = tcp_info.unpack(UNPACK_TCP_STATE_FROM_TCP_INFO)[0]
    # TIME_WAIT: 6, CLOSE: 7, CLOSE_WAIT: 8, LAST_ACK: 9, CLOSING: 11
    (state >= 6 && state <= 9) || state == 11
  end
end

#cork_socket(socket)

6 == Socket::IPPROTO_TCP, 3 == TCP_CORK, 1/0 == turn on/off.

See additional method definition at line 164.

[ GitHub ]

  
# File 'lib/puma/server.rb', line 182

def cork_socket(socket)
  skt = socket.to_io
  begin
    skt.setsockopt(Socket::IPPROTO_TCP, Socket::TCP_CORK, 1) if skt.kind_of? TCPSocket
  rescue IOError, SystemCallError
    Puma::Util.purge_interrupt_queue
  end
end

#graceful_shutdown

Wait for all outstanding requests to finish.

[ GitHub ]

  
# File 'lib/puma/server.rb', line 591

def graceful_shutdown
  if options[:shutdown_debug]
    threads = Thread.list
    total = threads.size

    pid = Process.pid

    $stdout.syswrite "#{pid}: === Begin thread backtrace dump ===\n"

    threads.each_with_index do |t,i|
      $stdout.syswrite "#{pid}: Thread #{i+1}/#{total}: #{t.inspect}\n"
      $stdout.syswrite "#{pid}: #{t.backtrace.join("\n#{pid}: ")}\n\n"
    end
    $stdout.syswrite "#{pid}: === End thread backtrace dump ===\n"
  end

  if @status != :restart
    @binder.close
  end

  if @thread_pool
    if timeout = options[:force_shutdown_after]
      @thread_pool.shutdown timeout.to_f
    else
      @thread_pool.shutdown
    end
  end
end
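
Both branches above are driven by configuration. A hedged sketch of the corresponding settings in a puma config file (DSL method names assumed to match the option keys used above):

# config/puma.rb
force_shutdown_after 30   # cap ThreadPool#shutdown at 30 seconds
shutdown_debug true       # dump per-thread backtraces during shutdown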

#halt(sync = false)

[ GitHub ]

  
# File 'lib/puma/server.rb', line 643

def halt(sync=false)
  notify_safely(HALT_COMMAND)
  @thread.join if @thread && sync
end

#handle_check

This method is for internal use only.
[ GitHub ]

  
# File 'lib/puma/server.rb', line 431

def handle_check
  cmd = @check.read(1)

  case cmd
  when STOP_COMMAND
    @status = :stop
    return true
  when HALT_COMMAND
    @status = :halt
    return true
  when RESTART_COMMAND
    @status = :restart
    return true
  end

  false
end

#handle_servers

[ GitHub ]

  
# File 'lib/puma/server.rb', line 320

def handle_servers
  begin
    check = @check
    sockets = [check] + @binder.ios
    pool = @thread_pool
    queue_requests = @queue_requests
    drain = options[:drain_on_shutdown] ? 0 : nil
    max_flt = @max_threads.to_f

    addr_send_name, addr_value = case options[:remote_address]
    when :value
      [:peerip=, options[:remote_address_value]]
    when :header
      [:remote_addr_header=, options[:remote_address_header]]
    when :proxy_protocol
      [:expect_proxy_proto=, options[:remote_address_proxy_protocol]]
    else
      [nil, nil]
    end

    while @status == :run || (drain && shutting_down?)
      begin
        ios = IO.select sockets, nil, nil, (shutting_down? ? 0 : @idle_timeout)
        unless ios
          unless shutting_down?
            @idle_timeout_reached = true

            if @clustered
              @worker_write << "#{PipeRequest::PIPE_IDLE}#{Process.pid}\n" rescue nil
              next
            else
              @log_writer.log "- Idle timeout reached"
              @status = :stop
            end
          end

          break
        end

        if @idle_timeout_reached && @clustered
          @idle_timeout_reached = false
          @worker_write << "#{PipeRequest::PIPE_IDLE}#{Process.pid}\n" rescue nil
        end

        ios.first.each do |sock|
          if sock == check
            break if handle_check
          else
            # if ThreadPool out_of_band code is running, we don't want to add
            # clients until the code is finished.
            sleep 0.001 while pool.out_of_band_running

            # only use delay when clustered and busy
            if pool.busy_threads >= @max_threads
              if @clustered
                delay = 0.0001 * ((@reactor&.reactor_size || 0) + pool.busy_threads * 1.5)/max_flt
                sleep delay
              else
                # use small sleep for busy single worker
                sleep 0.0001
              end
            end

            io = begin
              sock.accept_nonblock
            rescue IO::WaitReadable
              next
            end
            drain += 1 if shutting_down?
            pool << Client.new(io, @binder.env(sock)).tap { |c|
              c.listener = sock
              c.http_content_length_limit = @http_content_length_limit
              c.send(addr_send_name, addr_value) if addr_value
            }
          end
        end
      rescue IOError, Errno::EBADF
        # In the case that any of the sockets are unexpectedly closed.
        raise
      rescue StandardError => e
        @log_writer.unknown_error e, nil, "Listen loop"
      end
    end

    @log_writer.debug "Drained #{drain} additional connections." if drain
    @events.fire :state, @status

    if queue_requests
      @queue_requests = false
      @reactor.shutdown
    end

    graceful_shutdown if @status == :stop || @status == :restart
  rescue Exception => e
    @log_writer.unknown_error e, nil, "Exception handling servers"
  ensure
    # Errno::EBADF is infrequently raised
    [@check, @notify].each do |io|
      begin
        io.close unless io.closed?
      rescue Errno::EBADF
      end
    end
    @notify = nil
    @check = nil
  end

  @events.fire :state, :done
end

#inherit_binder(bind)

[ GitHub ]

  
# File 'lib/puma/server.rb', line 131

def inherit_binder(bind)
  @binder = bind
end

#lowlevel_error(e, env, status = 500)

A fallback Rack response if @app raises an exception.

[ GitHub ]

  
# File 'lib/puma/server.rb', line 564

def lowlevel_error(e, env, status=500)
  if handler = options[:lowlevel_error_handler]
    if handler.arity == 1
      return handler.call(e)
    elsif handler.arity == 2
      return handler.call(e, env)
    else
      return handler.call(e, env, status)
    end
  end

  if @leak_stack_on_error
    backtrace = e.backtrace.nil? ? '<no backtrace available>' : e.backtrace.join("\n")
    [status, {}, ["Puma caught this error: #{e.message} (#{e.class})\n#{backtrace}"]]
  else
    [status, {}, [""]]
  end
end
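
A hedged sketch of supplying a handler via a puma config file; the handler's arity decides which of the branches above is taken (a two-argument block receives the exception and the env):

# config/puma.rb
lowlevel_error_handler do |e, env|
  [500, { 'content-type' => 'text/plain' }, ["Something went wrong: #{e.message}\n"]]
end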

#notify_safely(message) (private)

[ GitHub ]

  
# File 'lib/puma/server.rb', line 620

def notify_safely(message)
  @notify << message
rescue IOError, NoMethodError, Errno::EPIPE, Errno::EBADF
  # The server, in another thread, is shutting down
  Puma::Util.purge_interrupt_queue
rescue RuntimeError => e
  # Temporary workaround for https://bugs.ruby-lang.org/issues/13239
  if e.message.include?('IOError')
    Puma::Util.purge_interrupt_queue
  else
    raise e
  end
end

#process_client(client)

Given a connection on client, handle the incoming requests, or queue the connection in the Reactor if no request is available.

This method is called from a ThreadPool worker thread.

This method supports HTTP Keep-Alive, so depending on whether the client indicates that it supports keep-alive, it may wait for another request before returning.

Returns true if one or more requests were processed.

[ GitHub ]

  
# File 'lib/puma/server.rb', line 459

def process_client(client)
  # Advertise this server into the thread
  Thread.current.puma_server = self

  clean_thread_locals = options[:clean_thread_locals]
  close_socket = true

  requests = 0

  begin
    if @queue_requests && !client.eagerly_finish

      client.set_timeout(@first_data_timeout)
      if @reactor.add client
        close_socket = false
        return false
      end
    end

    with_force_shutdown(client) do
      client.finish(@first_data_timeout)
    end

    @requests_count += 1
    case handle_request(client, requests + 1)
    when false
    when :async
      close_socket = false
    when true
      ThreadPool.clean_thread_locals if clean_thread_locals

      requests += 1

      client.reset

      # This indicates data exists in the client read buffer and there may be
      # additional requests on it, so process them
      next_request_ready = if client.has_back_to_back_requests?
        with_force_shutdown(client) { client.process_back_to_back_requests }
      else
        nil
      end

      if next_request_ready
        @thread_pool << client
        close_socket = false
      elsif @queue_requests
        client.set_timeout @persistent_timeout
        if @reactor.add client
          close_socket = false
        end
      end
    end
    true
  rescue StandardError => e
    client_error(e, client, requests)
    # The ensure tries to close client down
    requests > 0
  ensure
    client.io_buffer.reset

    begin
      client.close if close_socket
    rescue IOError, SystemCallError
      Puma::Util.purge_interrupt_queue
      # Already closed
    rescue StandardError => e
      @log_writer.unknown_error e, nil, "Client"
    end
  end
end

#reactor_wakeup(client)

This method is called from the Reactor thread when a queued Client receives data, times out, or when the Reactor is shutting down.

It is responsible for ensuring that a request has been completely received before it starts to be processed by the ThreadPool. This may be known as read buffering. If read buffering is not done here, and no other read buffering is performed (for example, by a reverse proxy such as nginx), then the application would be subject to a slow client attack.

For a graphical representation of how the request buffer works see architecture.md.

The method checks to see if it has the full header and body with the Client#try_to_finish method. If the full request has been sent, then the request is passed to the ThreadPool (@thread_pool << client) so that a “worker thread” can pick up the request and begin to execute application logic. The Client is then removed from the reactor (return true).

If a client object times out, a 408 response is written, its connection is closed, and the object is removed from the reactor (return true).

If the Reactor is shutting down, all Clients are either timed out or passed to the ThreadPool, depending on their current state (#can_close?).

Otherwise, if the full request is not ready then the client will remain in the reactor (return false). When the client sends more data to the socket the Client object will wake up and again be checked to see if it’s ready to be passed to the thread pool.

[ GitHub ]

  
# File 'lib/puma/server.rb', line 304

def reactor_wakeup(client)
  shutdown = !@queue_requests
  if client.try_to_finish || (shutdown && !client.can_close?)
    @thread_pool << client
  elsif shutdown || client.timeout == 0
    client.timeout!
  else
    client.set_timeout(@first_data_timeout)
    false
  end
rescue StandardError => e
  client_error(e, client)
  client.close
  true
end

#reset_max

[ GitHub ]

  
# File 'lib/puma/server.rb', line 683

def reset_max
  @reactor.reactor_max = 0
  @thread_pool.reset_max
end

#response_to_error(client, requests, err, status_code) (private)

[ GitHub ]

  
# File 'lib/puma/server.rb', line 583

def response_to_error(client, requests, err, status_code)
  status, headers, res_body = lowlevel_error(err, client.env, status_code)
  prepare_response(status, headers, res_body, requests, client)
end

#run(background = true, thread_name: 'srv')

Runs the server.

If background is true (the default) then a thread is spun up in the background to handle requests. Otherwise requests are handled synchronously.

[ GitHub ]

  
# File 'lib/puma/server.rb', line 247

def run(background=true, thread_name: 'srv')
  BasicSocket.do_not_reverse_lookup = true

  @events.fire :state, :booting

  @status = :run

  @thread_pool = ThreadPool.new(thread_name, options) { |client| process_client client }

  if @queue_requests
    @reactor = Reactor.new(@io_selector_backend) { |c| reactor_wakeup c }
    @reactor.run
  end

  @thread_pool.auto_reap! if options[:reaping_time]
  @thread_pool.auto_trim! if @min_threads != @max_threads && options[:auto_trim_time]

  @check, @notify = Puma::Util.pipe unless @notify

  @events.fire :state, :running

  if background
    @thread = Thread.new do
      Puma.set_thread_name thread_name
      handle_servers
    end
    return @thread
  else
    handle_servers
  end
end
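
A hedged lifecycle sketch, assuming server has already been constructed and given a listener:

thread = server.run          # background: spawns the 'srv' thread and returns it
# ... later, from a signal handler or another thread:
server.stop(true)            # sync: waits for the server thread to finish

# Or handle requests on the calling thread instead:
# server.run(false)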

#stop(sync = false)

Stops the acceptor thread and then causes the worker threads to finish off the request queue before finally exiting.

[ GitHub ]

  
# File 'lib/puma/server.rb', line 638

def stop(sync=false)
  notify_safely(STOP_COMMAND)
  @thread.join if @thread && sync
end

#uncork_socket(socket)

See additional method definition at line 173.

[ GitHub ]

  
# File 'lib/puma/server.rb', line 185

def uncork_socket(socket)
  skt = socket.to_io
  begin
    skt.setsockopt(Socket::IPPROTO_TCP, Socket::TCP_CORK, 0) if skt.kind_of? TCPSocket
  rescue IOError, SystemCallError
    Puma::Util.purge_interrupt_queue
  end
end

#with_force_shutdown(client, &block)

Triggers a client timeout if the thread-pool shuts down during execution of the provided block.

[ GitHub ]

  
# File 'lib/puma/server.rb', line 533

def with_force_shutdown(client, &block)
  @thread_pool.with_force_shutdown(&block)
rescue ThreadPool::ForceShutdown
  client.timeout!
end