PStore-based session storage class.
This builds upon the top-level PStore class provided by the library file pstore.rb. Session data is marshalled and stored in a file. File locking and transaction services are provided.
Restore session state from the session’s PStore file. Returns the session state as a hash.
Create a new CGI::Session::PStore instance. This constructor is used internally by CGI::Session. The user does not generally need to call it directly.
session
is the session for which this instance is being created. The session id must only contain alphanumeric characters; automatically generated session ids observe this requirement.
option
is a hash of options for the initializer. The following options are recognised:
tmpdir
the directory to use for storing the PStore file. Defaults to Dir::tmpdir (generally “/tmp” on Unix systems).
prefix
the prefix to add to the session id when generating the filename for this session’s PStore file. Defaults to the empty string.
This session’s PStore file will be created if it does not exist, or opened if it does.
Save session state to the session’s PStore file.
Update and close the session’s PStore file.
Close and delete the session’s PStore file.
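A sketch of typical usage, assuming the cgi, cgi/session and cgi/session/pstore libraries are available; the session key 'visits' and the filename prefix shown are illustrative:

require 'cgi'
require 'cgi/session'
require 'cgi/session/pstore'  # provides CGI::Session::PStore
require 'tmpdir'

cgi = CGI.new("html4")

# CGI::Session builds the PStore-backed store internally via the
# database_manager option; tmpdir and prefix are passed through to it.
session = CGI::Session.new(cgi,
  'database_manager' => CGI::Session::PStore,
  'tmpdir'           => Dir.tmpdir,
  'prefix'           => 'pstore_sid_')   # illustrative prefix

session['visits'] = session['visits'].to_i + 1  # marshalled into the PStore file
session.close                                   # update and close the PStore file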
Raised in case of a stack overflow.
def me_myself_and_i
  me_myself_and_i
end
me_myself_and_i
raises the exception:
SystemStackError: stack level too deep
Raised when an invalid operation is attempted on a Fiber, in particular when attempting to call/resume a dead fiber, attempting to yield from the root fiber, or calling a fiber across threads.
fiber = Fiber.new{}
fiber.resume #=> nil
fiber.resume #=> FiberError: dead fiber called
A class which allows both internal and external iteration.
An Enumerator can be created by the following methods: Kernel#to_enum, Kernel#enum_for, and Enumerator.new.
Most methods have two forms: a block form where the contents are evaluated for each item in the enumeration, and a non-block form which returns a new Enumerator wrapping the iteration.
enumerator = %w(one two three).each
puts enumerator.class # => Enumerator

enumerator.each_with_object("foo") do |item, obj|
  puts "#{obj}: #{item}"
end
# foo: one
# foo: two
# foo: three

enum_with_obj = enumerator.each_with_object("foo")
puts enum_with_obj.class # => Enumerator

enum_with_obj.each do |item, obj|
  puts "#{obj}: #{item}"
end
# foo: one
# foo: two
# foo: three
This allows you to chain Enumerators together. For example, you can map a list’s elements to strings containing the index and the element as a string via:
puts %w[foo bar baz].map.with_index { |w, i| "#{i}:#{w}" }
# => ["0:foo", "1:bar", "2:baz"]
An Enumerator can also be used as an external iterator. For example, Enumerator#next returns the next value of the iterator or raises StopIteration if the Enumerator is at the end.
e = [1,2,3].each  # returns an enumerator object.
puts e.next       # => 1
puts e.next       # => 2
puts e.next       # => 3
puts e.next       # raises StopIteration
next, next_values, peek, and peek_values are the only methods which use external iteration (and Array#zip(Enumerable-not-Array) which uses next internally).
These methods do not affect other internal enumeration methods, unless the underlying iteration method itself has side-effects, e.g. IO#each_line.
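For instance, peek returns the next value without advancing the external iteration, while next consumes it; a brief sketch:

e = [10, 20].each
e.peek  #=> 10  (the iterator does not advance)
e.next  #=> 10
e.peek  #=> 20
e.next  #=> 20
e.peek  # raises StopIteration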
FrozenError will be raised if these methods are called against a frozen enumerator. Since rewind and feed also change state for external iteration, these methods may raise FrozenError too.
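For example, a sketch of the frozen case (the exact error message may vary between Ruby versions):

e = [1, 2, 3].each.freeze
e.next
# raises FrozenError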
External iteration differs significantly from internal iteration due to using a Fiber:
The Fiber adds some overhead compared to internal enumeration.
The stacktrace will only include the stack from the Enumerator, not above.
Fiber-local variables are not inherited inside the Enumerator Fiber, which instead starts with no Fiber-local variables.
Fiber storage variables are inherited and are designed to handle Enumerator Fibers. Assigning to a Fiber storage variable only affects the current Fiber, so if you want to change state in the caller Fiber of the Enumerator Fiber, you need to use an extra indirection (e.g., use some object in the Fiber storage variable and mutate some ivar of it).
Concretely:
Thread.current[:fiber_local] = 1
Fiber[:storage_var] = 1
e = Enumerator.new do |y|
  p Thread.current[:fiber_local] # for external iteration: nil, for internal iteration: 1
  p Fiber[:storage_var] # => 1, inherited
  Fiber[:storage_var] += 1
  y << 42
end

p e.next # => 42
p Fiber[:storage_var] # => 1 (it ran in a different Fiber)

e.each { p _1 }
p Fiber[:storage_var] # => 2 (it ran in the same Fiber/"stack" as the current Fiber)
You can use an external iterator to implement an internal iterator as follows:
def ext_each(e)
  while true
    begin
      vs = e.next_values
    rescue StopIteration
      return $!.result
    end
    y = yield(*vs)
    e.feed y
  end
end

o = Object.new

def o.each
  puts yield
  puts yield(1)
  puts yield(1, 2)
  3
end

# use o.each as an internal iterator directly.
puts o.each {|*x| puts x; [:b, *x] }
# => [], [:b], [1], [:b, 1], [1, 2], [:b, 1, 2], 3

# convert o.each to an external iterator for
# implementing an internal iterator.
puts ext_each(o.to_enum) {|*x| puts x; [:b, *x] }
# => [], [:b], [1], [:b, 1], [1, 2], [:b, 1, 2], 3
Raised to stop the iteration, in particular by Enumerator#next. It is rescued by Kernel#loop.
loop do
  puts "Hello"
  raise StopIteration
  puts "World"
end
puts "Done!"
produces:
Hello
Done!
The most standard error types are subclasses of StandardError. A rescue clause without an explicit Exception class will rescue all StandardErrors (and only those).
def foo
  raise "Oups"
end
foo rescue "Hello" #=> "Hello"
On the other hand:
require 'does/not/exist' rescue "Hi"
raises the exception:
LoadError: no such file to load -- does/not/exist
Raised when memory allocation fails.
SystemCallError is the base class for all low-level platform-dependent errors.
The errors available on the current platform are subclasses of SystemCallError and are defined in the Errno module.
File.open("does/not/exist")
raises the exception:
Errno::ENOENT: No such file or directory - does/not/exist
Use the Monitor class when you want to have a lock object for blocks with mutual exclusion.
require 'monitor'

lock = Monitor.new
lock.synchronize do
  # exclusive access
end
This library provides three different ways to delegate method calls to an object. The easiest to use is SimpleDelegator. Pass an object to the constructor and all methods supported by the object will be delegated. This object can be changed later.
Going a step further, the top level DelegateClass method allows you to easily setup delegation through class inheritance. This is considerably more flexible and thus probably the most common use for this library.
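For instance, here is a minimal sketch of DelegateClass-based delegation; the MyQueue class and its methods are illustrative, not part of the library:

require 'delegate'

# MyQueue gains delegating versions of Array's public methods,
# forwarding them to the array passed to super.
class MyQueue < DelegateClass(Array)
  def initialize
    super([])      # the object being delegated to
  end

  def enqueue(item)
    push(item)     # delegated to the underlying Array
  end

  def dequeue
    shift          # also delegated
  end
end

q = MyQueue.new
q.enqueue(1)
q.enqueue(2)
q.dequeue  #=> 1
q.size     #=> 1 (delegated to the Array)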
Finally, if you need full control over the delegation scheme, you can inherit from the abstract class Delegator and customize as needed. (If you find yourself needing this control, have a look at Forwardable, which is also in the standard library. It may suit your needs better.)
SimpleDelegator’s implementation serves as a nice example of the use of Delegator:
require 'delegate'

class SimpleDelegator < Delegator
  def __getobj__
    @delegate_sd_obj # return object we are delegating to, required
  end

  def __setobj__(obj)
    @delegate_sd_obj = obj # change delegation object,
                           # a feature we're providing
  end
end
Be advised, RDoc will not detect delegated methods.
A concrete implementation of Delegator, this class provides the means to delegate all supported method calls to the object passed into the constructor and even to change the object being delegated to at a later time with __setobj__.
class User
  def born_on
    Date.new(1989, 9, 10)
  end
end

require 'delegate'

class UserDecorator < SimpleDelegator
  def birth_year
    born_on.year
  end
end

decorated_user = UserDecorator.new(User.new)
decorated_user.birth_year  #=> 1989
decorated_user.__getobj__  #=> #<User: ...>
A SimpleDelegator instance can take advantage of the fact that SimpleDelegator is a subclass of Delegator to call super to have methods called on the object being delegated to.
class SuperArray < SimpleDelegator
  def [](*args)
    super + 1
  end
end

SuperArray.new([1])[0]  #=> 2
Here’s a simple example that takes advantage of the fact that SimpleDelegator’s delegation object can be changed at any time.
class Stats
  def initialize
    @source = SimpleDelegator.new([])
  end

  def stats(records)
    @source.__setobj__(records)

    "Elements: #{@source.size}\n" +
    " Non-Nil: #{@source.compact.size}\n" +
    " Unique: #{@source.uniq.size}\n"
  end
end

s = Stats.new
puts s.stats(%w{James Edward Gray II})
puts
puts s.stats([1, 2, 3, nil, 4, 5, 1, 2])
Prints:
Elements: 4
 Non-Nil: 4
 Unique: 4

Elements: 8
 Non-Nil: 7
 Unique: 6
Ractor is an Actor-model abstraction for Ruby that provides thread-safe parallel execution.
Ractor.new makes a new Ractor, which can run in parallel.
# The simplest ractor
r = Ractor.new {puts "I am in Ractor!"}
r.take # wait for it to finish
# Here, "I am in Ractor!" is printed
Ractors do not share all objects with each other. There are two main benefits to this: across ractors, thread-safety concerns such as data-races and race-conditions are not possible. The other benefit is parallelism.
To achieve this, object sharing is limited across ractors. For example, unlike in threads, ractors can’t access all the objects available in other ractors. Even objects normally available through variables in the outer scope are prohibited from being used across ractors.
a = 1
r = Ractor.new {puts "I am in Ractor! a=#{a}"}
# fails immediately with
# ArgumentError (can not isolate a Proc because it accesses outer variables (a).)
The object must be explicitly shared:
a = 1
r = Ractor.new(a) { |a1| puts "I am in Ractor! a=#{a1}"}
On CRuby (the default implementation), Global Virtual Machine Lock (GVL) is held per ractor, so ractors can perform in parallel without locking each other. This is unlike the situation with threads on CRuby.
Instead of accessing shared state, objects should be passed to and from ractors by sending and receiving them as messages.
a = 1
r = Ractor.new do
  a_in_ractor = receive # receive blocks until somebody passes a message
  puts "I am in Ractor! a=#{a_in_ractor}"
end
r.send(a)  # pass it
r.take
# Here, "I am in Ractor! a=1" is printed
There are two pairs of methods for sending/receiving messages:
Ractor#send and Ractor.receive for when the sender knows the receiver (push);
Ractor.yield and Ractor#take for when the receiver knows the sender (pull).
In addition to that, any arguments passed to Ractor.new are passed to the block and available there as if received by Ractor.receive, and the last block value is sent outside of the ractor as if sent by Ractor.yield.
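For example, a small sketch of this behavior:

r = Ractor.new(10, 20) do |a, b|
  a + b   # the last block value is sent outside, as if by Ractor.yield
end
r.take #=> 30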
A little demonstration of a classic ping-pong:
server = Ractor.new(name: "server") do
  puts "Server starts: #{self.inspect}"
  puts "Server sends: ping"
  Ractor.yield 'ping'        # The server doesn't know the receiver and sends to whoever interested
  received = Ractor.receive  # The server doesn't know the sender and receives from whoever sent
  puts "Server received: #{received}"
end

client = Ractor.new(server) do |srv|  # The server is sent to the client, and available as srv
  puts "Client starts: #{self.inspect}"
  received = srv.take                 # The client takes a message from the server
  puts "Client received from " \
       "#{srv.inspect}: #{received}"
  puts "Client sends to " \
       "#{srv.inspect}: pong"
  srv.send 'pong'                     # The client sends a message to the server
end

[client, server].each(&:take)         # Wait until they both finish
This will output something like:
Server starts: #<Ractor:#2 server test.rb:1 running>
Server sends: ping
Client starts: #<Ractor:#3 test.rb:8 running>
Client received from #<Ractor:#2 server test.rb:1 blocking>: ping
Client sends to #<Ractor:#2 server test.rb:1 blocking>: pong
Server received: pong
Ractors receive their messages via the incoming port, and send them to the outgoing port. Either one can be disabled with Ractor#close_incoming and Ractor#close_outgoing, respectively. When a ractor terminates, its ports are closed automatically.
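A small sketch of closing the incoming port (the error message may vary):

r = Ractor.new { sleep(0.5) }  # this ractor never reads its inbox
r.close_incoming               # further sends to r will fail
begin
  r.send(:hello)
rescue Ractor::ClosedError => e
  puts "can't send: #{e.message}"
end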
When an object is sent to and from a ractor, it’s important to understand whether the object is shareable or unshareable. Most Ruby objects are unshareable objects. Even frozen objects can be unshareable if they contain (through their instance variables) unfrozen objects.
Shareable objects are those which can be used by several threads without compromising thread-safety, for example numbers, true and false. Ractor.shareable? allows you to check this, and Ractor.make_shareable tries to make the object shareable if it’s not already, and gives an error if it can’t do it.
Ractor.shareable?(1)                   #=> true -- numbers and other immutable basic values are shareable
Ractor.shareable?('foo')               #=> false, unless the string is frozen due to # frozen_string_literal: true
Ractor.shareable?('foo'.freeze)        #=> true
Ractor.shareable?([Object.new].freeze) #=> false, inner object is unfrozen

ary = ['hello', 'world']
ary.frozen?                 #=> false
ary[0].frozen?              #=> false
Ractor.make_shareable(ary)
ary.frozen?                 #=> true
ary[0].frozen?              #=> true
ary[1].frozen?              #=> true
When a shareable object is sent (via send or Ractor.yield), no additional processing occurs on it. It just becomes usable by both ractors. When an unshareable object is sent, it can be either copied or moved. The first is the default, and it copies the object fully by deep cloning (Object#clone) the non-shareable parts of its structure.
data = ['foo', 'bar'.freeze]
r = Ractor.new do
  data2 = Ractor.receive
  puts "In ractor: #{data2.object_id}, #{data2[0].object_id}, #{data2[1].object_id}"
end
r.send(data)
r.take
puts "Outside : #{data.object_id}, #{data[0].object_id}, #{data[1].object_id}"
This will output something like:
In ractor: 340, 360, 320
Outside : 380, 400, 320
Note that the object ids of the array and the non-frozen string inside the array have changed in the ractor because they are different objects. The array’s second element, which is a shareable frozen string, is the same object.
Deep cloning of objects may be slow, and sometimes impossible. Alternatively, move: true may be used during sending. This will move the unshareable object to the receiving ractor, making it inaccessible to the sending ractor.
data = ['foo', 'bar']
r = Ractor.new do
  data_in_ractor = Ractor.receive
  puts "In ractor: #{data_in_ractor.object_id}, #{data_in_ractor[0].object_id}"
end
r.send(data, move: true)
r.take
puts "Outside: moved? #{Ractor::MovedObject === data}"
puts "Outside: #{data.inspect}"
This will output:
In ractor: 100, 120
Outside: moved? true
test.rb:9:in `method_missing': can not send any methods to a moved object (Ractor::MovedError)
Notice that even inspect (and more basic methods like __id__) is inaccessible on a moved object.
Class and Module objects are shareable so the class/module definitions are shared between ractors. Ractor objects are also shareable. All operations on shareable objects are thread-safe, so the thread-safety property will be kept. We can not define mutable shareable objects in Ruby, but C extensions can introduce them.
It is prohibited to access (get) instance variables of shareable objects in other ractors if the values of the variables aren’t shareable. This can occur because modules/classes are shareable, but they can have instance variables whose values are not. In non-main ractors, it’s also prohibited to set instance variables on classes/modules (even if the value is shareable).
class C
  class << self
    attr_accessor :tricky
  end
end

C.tricky = "unshareable".dup

r = Ractor.new(C) do |cls|
  puts "I see #{cls}"
  puts "I can't see #{cls.tricky}"
  cls.tricky = true # doesn't get here, but this would also raise an error
end
r.take
# I see C
# can not access instance variables of classes/modules from non-main Ractors (RuntimeError)
Ractors can access constants if they are shareable. The main Ractor is the only one that can access non-shareable constants.
GOOD = 'good'.freeze
BAD = 'bad'.dup

r = Ractor.new do
  puts "GOOD=#{GOOD}"
  puts "BAD=#{BAD}"
end
r.take
# GOOD=good
# can not access non-shareable objects in constant Object::BAD by non-main Ractor. (NameError)

# Consider the same C class from above
r = Ractor.new do
  puts "I see #{C}"
  puts "I can't see #{C.tricky}"
end
r.take
# I see C
# can not access instance variables of classes/modules from non-main Ractors (RuntimeError)
See also the description of the # shareable_constant_value pragma in the Comments syntax explanation.
Each ractor has its own main Thread. New threads can be created from inside ractors (and, on CRuby, they share the GVL with other threads of this ractor).
r = Ractor.new do
  a = 1
  Thread.new {puts "Thread in ractor: a=#{a}"}.join
end
r.take
# Here "Thread in ractor: a=1" will be printed
In the examples below, sometimes we use the following method to wait for ractors that are not currently blocked to finish (or to make progress).
def wait
  sleep(0.1)
end
It is **only for demonstration purposes** and shouldn’t be used in real code. Most of the time, take is used to wait for ractors to finish.
See Ractor design doc for more details.
Raised when given an invalid regexp expression.
Regexp.new("?")
raises the exception:
RegexpError: target of repeat operator is not specified: /?/
Raised when an invalid operation is attempted on a thread.
For example, when no other thread has been started:
Thread.stop
This will raise the following exception:
ThreadError: stopping only thread
note: use sleep to stop forever
In concurrent programming, a monitor is an object or module intended to be used safely by more than one thread. The defining characteristic of a monitor is that its methods are executed with mutual exclusion. That is, at each point in time, at most one thread may be executing any of its methods. This mutual exclusion greatly simplifies reasoning about the implementation of monitors compared to reasoning about parallel code that updates a data structure.
You can read more about the general principles on the Wikipedia page for Monitors.
require 'monitor.rb'

buf = []
buf.extend(MonitorMixin)
empty_cond = buf.new_cond

# consumer
Thread.start do
  loop do
    buf.synchronize do
      empty_cond.wait_while { buf.empty? }
      print buf.shift
    end
  end
end

# producer
while line = ARGF.gets
  buf.synchronize do
    buf.push(line)
    empty_cond.signal
  end
end
The consumer thread waits for the producer thread to push a line to buf while buf.empty?. The producer thread (main thread) reads a line from ARGF and pushes it into buf, then calls empty_cond.signal to notify the consumer thread of new data.
Class include

require 'monitor'

class SynchronizedArray < Array

  include MonitorMixin

  def initialize(*args)
    super(*args)
  end

  alias :old_shift :shift
  alias :old_unshift :unshift

  def shift(n=1)
    self.synchronize do
      self.old_shift(n)
    end
  end

  def unshift(item)
    self.synchronize do
      self.old_unshift(item)
    end
  end

  # other methods ...
end
SynchronizedArray implements an Array with synchronized access to items. This Class is implemented as a subclass of Array which includes the MonitorMixin module.
This exception is raised if a generator or unparser error occurs.
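For example, a sketch of triggering it by generating JSON from a value that has no JSON representation (NaN is rejected unless allow_nan: true is passed):

require 'json'

begin
  JSON.generate(Float::NAN)  # NaN cannot be represented in JSON
rescue JSON::GeneratorError => e
  puts e.message
end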
Response class for Request Header Fields Too Large responses (status code 431).
An individual header field, or all the header fields collectively, are too large.
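A sketch of detecting this response class with net/http, assuming the Net::HTTPRequestHeaderFieldsTooLarge constant; the URL is illustrative:

require 'net/http'
require 'uri'

res = Net::HTTP.get_response(URI('https://example.com/'))
if res.is_a?(Net::HTTPRequestHeaderFieldsTooLarge)  # status code 431
  warn "Request header fields too large (#{res.code})"
end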