Results for: "to_proc"

Subclass of Zlib::Error. This error is raised when an operation is attempted while a zlib stream is still in progress.

For example:

require 'zlib'

compressed = Zlib::Deflate.deflate('example data')
inflater = Zlib::Inflate.new
inflater.inflate(compressed) do
  inflater.inflate(compressed) # Raises Zlib::InProgressError
end

Response class for Processing responses (status code 102).

The Processing response indicates that the server has received and is processing the request, but no response is available yet.

Response class for Unprocessable Entity responses (status code 422).

The request was well-formed but had semantic errors.
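
As a hedged illustration, these response classes can be matched through their superclasses; the response object below is constructed locally rather than returned by a real request:

require 'net/http'

res = Net::HTTPUnprocessableEntity.new('1.1', '422', 'Unprocessable Entity')
case res
when Net::HTTPSuccess     then puts "success"
when Net::HTTPClientError then puts "client error: #{res.code}" # 422 lands here
else                            puts "other: #{res.code}"
end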

Represents assigning to a constant using an operator that isn't `=`.

Target += value
^^^^^^^^^^^^^^^

A class which allows both internal and external iteration.

An Enumerator can be created by the following methods: Kernel#to_enum, Kernel#enum_for, or Enumerator.new.

Most methods have two forms: a block form where the contents are evaluated for each item in the enumeration, and a non-block form which returns a new Enumerator wrapping the iteration.

enumerator = %w(one two three).each
puts enumerator.class # => Enumerator

enumerator.each_with_object("foo") do |item, obj|
  puts "#{obj}: #{item}"
end

# foo: one
# foo: two
# foo: three

enum_with_obj = enumerator.each_with_object("foo")
puts enum_with_obj.class # => Enumerator

enum_with_obj.each do |item, obj|
  puts "#{obj}: #{item}"
end

# foo: one
# foo: two
# foo: three

This allows you to chain Enumerators together. For example, you can map a list's elements to strings containing the index and the element via:

p %w[foo bar baz].map.with_index { |w, i| "#{i}:#{w}" }
# => ["0:foo", "1:bar", "2:baz"]

External Iteration

An Enumerator can also be used as an external iterator. For example, Enumerator#next returns the next value of the iterator or raises StopIteration if the Enumerator is at the end.

e = [1,2,3].each   # returns an enumerator object.
puts e.next   # => 1
puts e.next   # => 2
puts e.next   # => 3
puts e.next   # raises StopIteration

next, next_values, peek, and peek_values are the only methods which use external iteration (as does Array#zip when zipping with a non-Array Enumerable, since it uses next internally).

These methods do not affect other internal enumeration methods, unless the underlying iteration method itself has side effects, e.g. IO#each_line.

FrozenError will be raised if these methods are called against a frozen enumerator. Since rewind and feed also change state for external iteration, these methods may raise FrozenError too.
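
For instance (a small sketch, not taken from the original documentation), peek reads without advancing, rewind restarts the iteration, and calling next on a frozen enumerator raises FrozenError:

e = [1, 2, 3].each
e.peek   # => 1 (does not advance)
e.next   # => 1
e.next   # => 2
e.rewind
e.next   # => 1

begin
  [1, 2, 3].each.freeze.next
rescue FrozenError => err
  puts err.class # => FrozenError
end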

External iteration differs significantly from internal iteration because it runs the enumerator's block on a separate Fiber: fiber-local variables set in the calling thread are not visible there, while Fiber storage is inherited.

Concretely:

Thread.current[:fiber_local] = 1
Fiber[:storage_var] = 1
e = Enumerator.new do |y|
  p Thread.current[:fiber_local] # for external iteration: nil, for internal iteration: 1
  p Fiber[:storage_var] # => 1, inherited
  Fiber[:storage_var] += 1
  y << 42
end

p e.next # => 42
p Fiber[:storage_var] # => 1 (it ran in a different Fiber)

e.each { p _1 }
p Fiber[:storage_var] # => 2 (it ran in the same Fiber/"stack" as the current Fiber)

Convert External Iteration to Internal Iteration

You can use an external iterator to implement an internal iterator as follows:

def ext_each(e)
  while true
    begin
      vs = e.next_values
    rescue StopIteration
      return $!.result
    end
    y = yield(*vs)
    e.feed y
  end
end

o = Object.new

def o.each
  puts yield
  puts yield(1)
  puts yield(1, 2)
  3
end

# use o.each as an internal iterator directly.
puts o.each {|*x| puts x; [:b, *x] }
# => [], [:b], [1], [:b, 1], [1, 2], [:b, 1, 2], 3

# convert o.each to an external iterator for
# implementing an internal iterator.
puts ext_each(o.to_enum) {|*x| puts x; [:b, *x] }
# => [], [:b], [1], [:b, 1], [1, 2], [:b, 1, 2], 3

Raised to stop the iteration, in particular by Enumerator#next. It is rescued by Kernel#loop.

loop do
  puts "Hello"
  raise StopIteration
  puts "World"
end
puts "Done!"

produces:

Hello
Done!
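
Because Kernel#loop rescues StopIteration, it is also the idiomatic way to drain an external iterator; a minimal sketch:

e = [1, 2, 3].each
loop do
  puts e.next
end
# Prints 1, 2, 3; the StopIteration raised by the fourth call to e.next ends the loop cleanly.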

Use the Monitor class when you want to have a lock object for blocks with mutual exclusion.

require 'monitor'

lock = Monitor.new
lock.synchronize do
  # exclusive access
end

This library provides three different ways to delegate method calls to an object. The easiest to use is SimpleDelegator. Pass an object to the constructor and all methods supported by the object will be delegated. This object can be changed later.

Going a step further, the top level DelegateClass method allows you to easily setup delegation through class inheritance. This is considerably more flexible and thus probably the most common use for this library.

Finally, if you need full control over the delegation scheme, you can inherit from the abstract class Delegator and customize as needed. (If you find yourself needing this control, have a look at Forwardable which is also in the standard library. It may suit your needs better.)
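
A minimal sketch of the DelegateClass form mentioned above (the class and method names are illustrative, not taken from the library documentation):

require 'delegate'
require 'tempfile'

# Inherit the full File interface via DelegateClass, then add behavior on top.
class LoggedFile < DelegateClass(File)
  def log(message)
    write("[LOG] #{message}\n") # write is delegated to the wrapped File
  end
end

Tempfile.create('demo') do |f|
  lf = LoggedFile.new(f) # the constructor takes the object to delegate to
  lf.log('hello')
  lf.rewind              # rewind and read are delegated as well
  puts lf.read           # prints "[LOG] hello"
end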

SimpleDelegator’s implementation serves as a nice example of the use of Delegator:

require 'delegate'

class SimpleDelegator < Delegator
  def __getobj__
    @delegate_sd_obj # return object we are delegating to, required
  end

  def __setobj__(obj)
    @delegate_sd_obj = obj # change delegation object,
                           # a feature we're providing
  end
end

Notes

Be advised, RDoc will not detect delegated methods.

A concrete implementation of Delegator, this class provides the means to delegate all supported method calls to the object passed into the constructor and even to change the object being delegated to at a later time with __setobj__.

class User
  def born_on
    Date.new(1989, 9, 10)
  end
end

require 'date'
require 'delegate'

class UserDecorator < SimpleDelegator
  def birth_year
    born_on.year
  end
end

decorated_user = UserDecorator.new(User.new)
decorated_user.birth_year  #=> 1989
decorated_user.__getobj__  #=> #<User: ...>

A SimpleDelegator instance can take advantage of the fact that SimpleDelegator is a subclass of Delegator to call super to have methods called on the object being delegated to.

class SuperArray < SimpleDelegator
  def [](*args)
    super + 1
  end
end

SuperArray.new([1])[0]  #=> 2

Here’s a simple example that takes advantage of the fact that SimpleDelegator’s delegation object can be changed at any time.

class Stats
  def initialize
    @source = SimpleDelegator.new([])
  end

  def stats(records)
    @source.__setobj__(records)

    "Elements:  #{@source.size}\n" +
    " Non-Nil:  #{@source.compact.size}\n" +
    "  Unique:  #{@source.uniq.size}\n"
  end
end

s = Stats.new
puts s.stats(%w{James Edward Gray II})
puts
puts s.stats([1, 2, 3, nil, 4, 5, 1, 2])

Prints:

Elements:  4
 Non-Nil:  4
  Unique:  4

Elements:  8
 Non-Nil:  7
  Unique:  6

PStore implements a file based persistence mechanism based on a Hash. User code can store hierarchies of Ruby objects (values) into the data store by name (keys). An object hierarchy may be just a single object. User code may later read values back from the data store or even update data, as needed.

The transactional behavior ensures that any changes succeed or fail together. This can be used to ensure that the data store is not left in a transitory state, where some values were updated but others were not.

Behind the scenes, Ruby objects are stored to the data store file with Marshal. That carries the usual limitations. Proc objects cannot be marshalled, for example.
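
A quick sketch of that limitation (the setup here is illustrative): storing a Proc raises a TypeError when the transaction tries to marshal the table.

require 'pstore'
require 'tempfile'

Tempfile.create do |file|
  store = PStore.new(file.path)
  begin
    store.transaction { store[:callback] = proc { 42 } }
  rescue TypeError => e
    puts e.message # e.g. "no _dump_data is defined for class Proc"
  end
end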

There are three important concepts here, each covered in its own section below: the store, its entries, and transactions.

About the Examples

Examples on this page need a store that has known properties. They can get a new (and populated) store by calling the helper method example_store:

example_store do |store|
  # Example code using store goes here.
end

All we really need to know about example_store is that it yields a fresh store with a known population of entries; its implementation:

require 'pstore'
require 'tempfile'
# Yield a pristine store for use in examples.
def example_store
  # Create the store in a temporary file.
  Tempfile.create do |file|
    store = PStore.new(file.path)
    # Populate the store.
    store.transaction do
      store[:foo] = 0
      store[:bar] = 1
      store[:baz] = 2
    end
    yield store
  end
end

The Store

The contents of the store are maintained in a file whose path is specified when the store is created (see PStore.new). The objects are stored and retrieved using module Marshal, which means that certain objects cannot be added to the store; see Marshal::dump.

Entries

A store may have any number of entries. Each entry has a key and a value, just as in a hash.

Transactions

The Transaction Block

The block given with a call to method transaction contains a transaction, which consists of calls to PStore methods that read from or write to the store (that is, all PStore methods except transaction itself, path, and PStore.new):

example_store do |store|
  store.transaction do
    store.keys # => [:foo, :bar, :baz]
    store[:bat] = 3
    store.keys # => [:foo, :bar, :baz, :bat]
  end
end

Execution of the transaction is deferred until the block exits, and is executed atomically (all-or-nothing): either all transaction calls are executed, or none are. This maintains the integrity of the store.

Other code in the block (including even calls to path and PStore.new) is executed immediately, not deferred.

As seen above, changes made in a transaction are committed automatically when the block exits. The block may also be exited early by calling method commit or abort.
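
For example (a small sketch using the example_store helper from above), abort discards any changes made earlier in the block:

example_store do |store|
  store.transaction do
    store[:foo] = 99
    store.abort # exit the transaction early; the write above is discarded
  end
  store.transaction do
    store[:foo] # => 0 (unchanged)
  end
end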

Read-Only Transactions

By default, a transaction allows both reading from and writing to the store:

store.transaction do
  # Read-write transaction.
  # Any code except a call to #transaction is allowed here.
end

If argument read_only is passed as true, only reading is allowed:

store.transaction(true) do
  # Read-only transaction:
  # Calls to #transaction, #[]=, and #delete are not allowed here.
end

Hierarchical Values

The value for an entry may be a simple object (as seen above). It may also be a hierarchy of objects nested to any depth:

deep_store = PStore.new('deep.store')
deep_store.transaction do
  array_of_hashes = [{}, {}, {}]
  deep_store[:array_of_hashes] = array_of_hashes
  deep_store[:array_of_hashes] # => [{}, {}, {}]
  hash_of_arrays = {foo: [], bar: [], baz: []}
  deep_store[:hash_of_arrays] = hash_of_arrays
  deep_store[:hash_of_arrays]  # => {:foo=>[], :bar=>[], :baz=>[]}
  deep_store[:hash_of_arrays][:foo].push(:bat)
  deep_store[:hash_of_arrays]  # => {:foo=>[:bat], :bar=>[], :baz=>[]}
end

And recall that you can use dig methods in a returned hierarchy of objects.
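
Continuing the example above, a brief sketch of dig on a retrieved hierarchy:

deep_store.transaction do
  deep_store[:hash_of_arrays].dig(:foo, 0) # => :bat
  deep_store[:array_of_hashes].dig(0)      # => {}
end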

Working with the Store

Creating a Store

Use method PStore.new to create a store. The new store creates or opens its containing file:

store = PStore.new('t.store')

Modifying the Store

Use method []= to update or create an entry:

example_store do |store|
  store.transaction do
    store[:foo] = 1 # Update.
    store[:bam] = 1 # Create.
  end
end

Use method delete to remove an entry:

example_store do |store|
  store.transaction do
    store.delete(:foo)
    store[:foo] # => nil
  end
end

Retrieving Values

Use method fetch (allows default) or [] (defaults to nil) to retrieve an entry:

example_store do |store|
  store.transaction do
    store[:foo]             # => 0
    store[:nope]            # => nil
    store.fetch(:baz)       # => 2
    store.fetch(:nope, nil) # => nil
    store.fetch(:nope)      # Raises exception.
  end
end

Querying the Store

Use method key? to determine whether a given key exists:

example_store do |store|
  store.transaction do
    store.key?(:foo) # => true
  end
end

Use method keys to retrieve keys:

example_store do |store|
  store.transaction do
    store.keys # => [:foo, :bar, :baz]
  end
end

Use method path to retrieve the path to the store’s underlying file; this method may be called from outside a transaction block:

store = PStore.new('t.store')
store.path # => "t.store"

Transaction Safety

For transaction safety, see the thread_safe argument to PStore.new and the ultra_safe attribute; a brief sketch appears below.

Needless to say, if you're storing valuable data with PStore, you should back up the PStore file from time to time.
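
A minimal sketch, assuming the thread_safe argument to PStore.new and the ultra_safe attribute behave as described in the PStore documentation:

require 'pstore'

store = PStore.new('t.store', true) # second argument thread_safe: allow use from multiple threads
store.ultra_safe = true             # trade some speed for extra care when writing the file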

An Example Store

require "pstore"

# A mock wiki object.
class WikiPage

  attr_reader :page_name

  def initialize(page_name, author, contents)
    @page_name = page_name
    @revisions = Array.new
    add_revision(author, contents)
  end

  def add_revision(author, contents)
    @revisions << {created: Time.now,
                   author: author,
                   contents: contents}
  end

  def wiki_page_references
    [@page_name] + @revisions.last[:contents].scan(/\b(?:[A-Z]+[a-z]+){2,}/)
  end

end

# Create a new wiki page.
home_page = WikiPage.new("HomePage", "James Edward Gray II",
                         "A page about the JoysOfDocumentation..." )

wiki = PStore.new("wiki_pages.pstore")
# Update page data and the index together, or not at all.
wiki.transaction do
  # Store page.
  wiki[home_page.page_name] = home_page
  # Create page index.
  wiki[:wiki_index] ||= Array.new
  # Update wiki index.
  wiki[:wiki_index].push(*home_page.wiki_page_references)
end

# Read wiki data, setting argument read_only to true.
wiki.transaction(true) do
  wiki.keys.each do |key|
    puts key
    puts wiki[key]
  end
end

Ractor is an Actor-model abstraction for Ruby that provides thread-safe parallel execution.

Ractor.new makes a new Ractor, which can run in parallel.

# The simplest ractor
r = Ractor.new {puts "I am in Ractor!"}
r.take # wait for it to finish
# Here, "I am in Ractor!" is printed

Ractors do not share all objects with each other. There are two main benefits to this: first, thread-safety concerns such as data races and race conditions cannot occur across ractors; second, ractors can run in parallel.

To achieve this, object sharing is limited across ractors. For example, unlike in threads, ractors can’t access all the objects available in other ractors. Even objects normally available through variables in the outer scope are prohibited from being used across ractors.

a = 1
r = Ractor.new {puts "I am in Ractor! a=#{a}"}
# fails immediately with
# ArgumentError (can not isolate a Proc because it accesses outer variables (a).)

The object must be explicitly shared:

a = 1
r = Ractor.new(a) { |a1| puts "I am in Ractor! a=#{a1}"}

On CRuby (the default implementation), the Global Virtual Machine Lock (GVL) is held per ractor, so ractors can run in parallel without blocking each other, unlike threads on CRuby.

Instead of accessing shared state, objects should be passed to and from ractors by sending and receiving them as messages.

a = 1
r = Ractor.new do
  a_in_ractor = receive # receive blocks until somebody passes a message
  puts "I am in Ractor! a=#{a_in_ractor}"
end
r.send(a)  # pass it
r.take
# Here, "I am in Ractor! a=1" is printed

There are two pairs of methods for sending/receiving messages: Ractor#send and Ractor.receive for when the sender knows the receiver (push), and Ractor.yield and Ractor#take for when the receiver knows the sender (pull).

In addition to that, any arguments passed to Ractor.new are passed to the block and available there as if received by Ractor.receive, and the last block value is sent outside of the ractor as if sent by Ractor.yield.

A little demonstration of a classic ping-pong:

server = Ractor.new(name: "server") do
  puts "Server starts: #{self.inspect}"
  puts "Server sends: ping"
  Ractor.yield 'ping'                       # The server doesn't know the receiver and sends to whoever is interested
  received = Ractor.receive                 # The server doesn't know the sender and receives from whoever sends
  puts "Server received: #{received}"
end

client = Ractor.new(server) do |srv|        # The server is sent to the client, and available as srv
  puts "Client starts: #{self.inspect}"
  received = srv.take                       # The client takes a message from the server
  puts "Client received from " \
       "#{srv.inspect}: #{received}"
  puts "Client sends to " \
       "#{srv.inspect}: pong"
  srv.send 'pong'                           # The client sends a message to the server
end

[client, server].each(&:take)               # Wait until they both finish

This will output something like:

Server starts: #<Ractor:#2 server test.rb:1 running>
Server sends: ping
Client starts: #<Ractor:#3 test.rb:8 running>
Client received from #<Ractor:#2 server test.rb:1 blocking>: ping
Client sends to #<Ractor:#2 server test.rb:1 blocking>: pong
Server received: pong

Ractors receive their messages via the incoming port, and send them to the outgoing port. Either one can be disabled with Ractor#close_incoming and Ractor#close_outgoing, respectively. When a ractor terminates, its ports are closed automatically.
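
A small sketch (not from the original documentation): once a ractor has terminated, its incoming port is closed and sending to it raises Ractor::ClosedError.

r = Ractor.new { :done }
r.take # => :done; the ractor has terminated and its ports are closed
begin
  r.send(1)
rescue Ractor::ClosedError => e
  puts e.class # => Ractor::ClosedError
end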

Shareable and unshareable objects

When an object is sent to and from a ractor, it’s important to understand whether the object is shareable or unshareable. Most Ruby objects are unshareable objects. Even frozen objects can be unshareable if they contain (through their instance variables) unfrozen objects.

Shareable objects are those which can be used by several threads without compromising thread-safety, for example numbers, true and false. Ractor.shareable? allows you to check this, and Ractor.make_shareable tries to make the object shareable if it’s not already, and gives an error if it can’t do it.

Ractor.shareable?(1)            #=> true -- numbers and other immutable basic values are shareable
Ractor.shareable?('foo')        #=> false, unless the string is frozen due to # frozen_string_literal: true
Ractor.shareable?('foo'.freeze) #=> true
Ractor.shareable?([Object.new].freeze) #=> false, inner object is unfrozen

ary = ['hello', 'world']
ary.frozen?                 #=> false
ary[0].frozen?              #=> false
Ractor.make_shareable(ary)
ary.frozen?                 #=> true
ary[0].frozen?              #=> true
ary[1].frozen?              #=> true

When a shareable object is sent (via send or Ractor.yield), no additional processing occurs on it. It just becomes usable by both ractors. When an unshareable object is sent, it can be either copied or moved. The first is the default, and it copies the object fully by deep cloning (Object#clone) the non-shareable parts of its structure.

data = ['foo', 'bar'.freeze]
r = Ractor.new do
  data2 = Ractor.receive
  puts "In ractor: #{data2.object_id}, #{data2[0].object_id}, #{data2[1].object_id}"
end
r.send(data)
r.take
puts "Outside  : #{data.object_id}, #{data[0].object_id}, #{data[1].object_id}"

This will output something like:

In ractor: 340, 360, 320
Outside  : 380, 400, 320

Note that the object ids of the array and the non-frozen string inside the array have changed in the ractor because they are different objects. The array's second element, which is a shareable frozen string, is the same object.

Deep cloning of objects may be slow, and sometimes impossible. Alternatively, move: true may be used during sending. This will move the unshareable object to the receiving ractor, making it inaccessible to the sending ractor.

data = ['foo', 'bar']
r = Ractor.new do
  data_in_ractor = Ractor.receive
  puts "In ractor: #{data_in_ractor.object_id}, #{data_in_ractor[0].object_id}"
end
r.send(data, move: true)
r.take
puts "Outside: moved? #{Ractor::MovedObject === data}"
puts "Outside: #{data.inspect}"

This will output:

In ractor: 100, 120
Outside: moved? true
test.rb:9:in `method_missing': can not send any methods to a moved object (Ractor::MovedError)

Notice that even inspect (and more basic methods like __id__) is inaccessible on a moved object.

Class and Module objects are shareable, so class/module definitions are shared between ractors. Ractor objects are also shareable. All operations on shareable objects are thread-safe, so the thread-safety property is preserved. Mutable shareable objects cannot be defined in Ruby, but C extensions can introduce them.

It is prohibited to access (get) instance variables of shareable objects in other ractors if the values of the variables aren’t shareable. This can occur because modules/classes are shareable, but they can have instance variables whose values are not. In non-main ractors, it’s also prohibited to set instance variables on classes/modules (even if the value is shareable).

class C
  class << self
    attr_accessor :tricky
  end
end

C.tricky = "unshareable".dup

r = Ractor.new(C) do |cls|
  puts "I see #{cls}"
  puts "I can't see #{cls.tricky}"
  cls.tricky = true # doesn't get here, but this would also raise an error
end
r.take
# I see C
# can not access instance variables of classes/modules from non-main Ractors (RuntimeError)

Ractors can access constants if they are shareable. The main Ractor is the only one that can access non-shareable constants.

GOOD = 'good'.freeze
BAD = 'bad'.dup

r = Ractor.new do
  puts "GOOD=#{GOOD}"
  puts "BAD=#{BAD}"
end
r.take
# GOOD=good
# can not access non-shareable objects in constant Object::BAD by non-main Ractor. (NameError)

# Consider the same C class from above

r = Ractor.new do
  puts "I see #{C}"
  puts "I can't see #{C.tricky}"
end
r.take
# I see C
# can not access instance variables of classes/modules from non-main Ractors (RuntimeError)

See also the description of shareable_constant_value pragma in Comments syntax explanation.
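
A hedged sketch of that pragma in its literal mode (the constant name is illustrative): constants built only from literals are deeply frozen and therefore shareable.

# shareable_constant_value: literal
CONFIG = { retries: 3, name: "worker" } # deep-frozen because of the pragma
Ractor.shareable?(CONFIG)               # => true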

Ractors vs threads

Each ractor has its own main Thread. New threads can be created from inside ractors (and, on CRuby, they share the GVL with other threads of this ractor).

r = Ractor.new do
  a = 1
  Thread.new {puts "Thread in ractor: a=#{a}"}.join
end
r.take
# Here "Thread in ractor: a=1" will be printed

Note on code examples

In the examples below, sometimes we use the following method to wait for ractors that are not currently blocked to finish (or to make progress).

def wait
  sleep(0.1)
end

It is only for demonstration purposes and shouldn't be used in real code. Most of the time, take is used to wait for ractors to finish.

Reference

See Ractor design doc for more details.

In concurrent programming, a monitor is an object or module intended to be used safely by more than one thread. The defining characteristic of a monitor is that its methods are executed with mutual exclusion. That is, at each point in time, at most one thread may be executing any of its methods. This mutual exclusion greatly simplifies reasoning about the implementation of monitors compared to reasoning about parallel code that updates a data structure.

You can read more about the general principles on the Wikipedia page for Monitors.

Examples

Simple object.extend

require 'monitor'

buf = []
buf.extend(MonitorMixin)
empty_cond = buf.new_cond

# consumer
Thread.start do
  loop do
    buf.synchronize do
      empty_cond.wait_while { buf.empty? }
      print buf.shift
    end
  end
end

# producer
while line = ARGF.gets
  buf.synchronize do
    buf.push(line)
    empty_cond.signal
  end
end

The consumer thread waits for the producer thread to push a line to buf while buf.empty?. The producer thread (main thread) reads a line from ARGF, pushes it into buf, and then calls empty_cond.signal to notify the consumer thread of new data.

Simple Class include

require 'monitor'

class SynchronizedArray < Array

  include MonitorMixin

  def initialize(*args)
    super(*args)
  end

  alias :old_shift :shift
  alias :old_unshift :unshift

  def shift(n=1)
    self.synchronize do
      self.old_shift(n)
    end
  end

  def unshift(item)
    self.synchronize do
      self.old_unshift(item)
    end
  end

  # other methods ...
end

SynchronizedArray implements an Array with synchronized access to its items. The class is implemented as a subclass of Array that includes the MonitorMixin module.

The Singleton module implements the Singleton pattern.

Usage

To use Singleton, include the module in your class.

class Klass
   include Singleton
   # ...
end

This ensures that only one instance of Klass can be created.

a,b = Klass.instance, Klass.instance

a == b
# => true

Klass.new
# => NoMethodError - new is private ...

The instance is created upon the first call of Klass.instance().

class OtherKlass
  include Singleton
  # ...
end

ObjectSpace.each_object(OtherKlass){}
# => 0

OtherKlass.instance
ObjectSpace.each_object(OtherKlass){}
# => 1

This behavior is preserved under inheritance and cloning.
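
A short sketch of that claim (class names are illustrative): each class in the hierarchy gets its own single instance.

require 'singleton'

class Parent
  include Singleton
end

class Child < Parent
end

Parent.instance.equal?(Parent.instance) # => true
Child.instance.equal?(Child.instance)   # => true
Parent.instance.equal?(Child.instance)  # => false; each class has its own instance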

Implementation

The above is achieved by making Klass.new and Klass.allocate private, by overriding Klass.inherited and Klass.clone so that the Singleton properties are preserved in subclasses and clones, and by providing the Klass.instance method, which always returns the same object.

Singleton and Marshal

By default Singleton's _dump(depth) returns the empty string. Marshalling by default will strip state information, e.g. instance variables, from the instance. Classes using Singleton can provide custom _load(str) and _dump(depth) methods to retain some of the previous state of the instance.

require 'singleton'

class Example
  include Singleton
  attr_accessor :keep, :strip
  def _dump(depth)
    # this strips the @strip information from the instance
    Marshal.dump(@keep, depth)
  end

  def self._load(str)
    instance.keep = Marshal.load(str)
    instance
  end
end

a = Example.instance
a.keep = "keep this"
a.strip = "get rid of this"

stored_state = Marshal.dump(a)

a.keep = nil
a.strip = nil
b = Marshal.load(stored_state)
p a == b  #  => true
p a.keep  #  => "keep this"
p a.strip #  => nil

Enumerator::Product generates a Cartesian product of any number of enumerable objects. Iterating over the product of enumerable objects is roughly equivalent to nested each_entry loops where the loop for the rightmost object is put innermost.

innings = Enumerator::Product.new(1..9, ['top', 'bottom'])

innings.each do |i, h|
  p [i, h]
end
# [1, "top"]
# [1, "bottom"]
# [2, "top"]
# [2, "bottom"]
# [3, "top"]
# [3, "bottom"]
# ...
# [9, "top"]
# [9, "bottom"]

The method used against each enumerable object is each_entry instead of each, so that the product of N enumerable objects yields an array of exactly N elements in each iteration.

When no enumerable objects are given, it calls a given block once, yielding an empty argument list.

Objects of this type can be created by Enumerator.product.
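
For example, the innings enumerator above could equally be built with Enumerator.product:

innings = Enumerator.product(1..9, ['top', 'bottom'])
innings.take(3) # => [[1, "top"], [1, "bottom"], [2, "top"]]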

OLEProperty is a helper class for OLE properties that take arguments; it is used by files generated by olegen.rb.

Exception raised to wait for reading when a non-blocking operation fails with EINPROGRESS; see IO.select.

Exception raised to wait for writing when a non-blocking operation fails with EINPROGRESS; see IO.select.
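
These exceptions typically appear around a non-blocking connect; a hedged sketch of the usual pattern (host and port are illustrative):

require 'socket'

socket = Socket.new(:INET, :STREAM)
addr = Socket.sockaddr_in(80, 'example.com')
begin
  socket.connect_nonblock(addr)
rescue IO::WaitWritable # IO::EINPROGRESSWaitWritable includes IO::WaitWritable
  IO.select(nil, [socket]) # wait until the socket becomes writable
  begin
    socket.connect_nonblock(addr) # check the result of the connect
  rescue Errno::EISCONN
    # the socket is now connected
  end
end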

Raised when the address is an invalid length.

Response class for Use Proxy responses (status code 305).

The requested resource is available only through a proxy, whose address is provided in the response.

Response class for Proxy Authentication Required responses (status code 407).

The client must first authenticate itself with the proxy.

Represents a block local variable.

a { |; b| }
       ^

The top level node of any parse tree.
