Closes the outgoing port and returns its previous state. All further attempts to Ractor.yield in the ractor, and take from the ractor, will fail with Ractor::ClosedError.
  r = Ractor.new {sleep(500)}
  r.close_outgoing  #=> false
  r.close_outgoing  #=> true
  r.take            # Ractor::ClosedError (The outgoing-port is already closed)
Returns the status of the global “ignore deadlock” condition. The default is false, so that deadlock conditions are not ignored.

See also ::ignore_deadlock=.
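For example (a minimal check, assuming the flag has not been changed elsewhere):

  Thread.ignore_deadlock   # => false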
Returns the new state. When set to true, the VM will not check for deadlock conditions. It is only useful to set this if your application can break a deadlock condition via some other means, such as a signal.
  Thread.ignore_deadlock = true
  queue = Thread::Queue.new

  trap(:SIGUSR1){queue.push "Received signal"}

  # raises fatal error unless ignoring deadlock
  puts queue.pop
See also ::ignore_deadlock.
Changes asynchronous interrupt timing.

Here, interrupt means an asynchronous event and its corresponding procedure, triggered by Thread#raise, Thread#kill, signal trap (not supported yet), or main thread termination (if the main thread terminates, all other threads are killed).

The given hash has pairs like ExceptionClass => :TimingSymbol, where ExceptionClass is the interrupt handled by the given block and TimingSymbol is one of the following symbols:
:immediate
  Invoke interrupts immediately.
:on_blocking
  Invoke interrupts during a BlockingOperation.
:never
  Never invoke interrupts.
BlockingOperation means an operation that will block the calling thread, such as read and write. On the CRuby implementation, a BlockingOperation is any operation executed without the GVL.
Masked asynchronous interrupts are delayed until they are enabled. This method is similar to sigprocmask(3).

Asynchronous interrupts are difficult to use. If you need to communicate between threads, please consider another approach, such as Queue, or use them only with a deep understanding of this method.
In this example, we can guard against Thread#raise exceptions.

Using the :never TimingSymbol, the RuntimeError exception will always be ignored in the first block of the main thread. In the second ::handle_interrupt block we can purposefully handle RuntimeError exceptions.
  th = Thread.new do
    Thread.handle_interrupt(RuntimeError => :never) {
      begin
        # You can write resource allocation code safely.
        Thread.handle_interrupt(RuntimeError => :immediate) {
          # ...
        }
      ensure
        # You can write resource deallocation code safely.
      end
    }
  end
  Thread.pass
  # ...
  th.raise "stop"
While we are ignoring the RuntimeError exception, it’s safe to write our resource allocation code. Then, the ensure block is where we can safely deallocate our resources.
Guarding from Timeout::Error
In the next example, we will guard against the Timeout::Error exception. This will help prevent resources from leaking when Timeout::Error exceptions occur during a normal ensure clause. For this example we use the help of the standard library Timeout, from lib/timeout.rb:
  require 'timeout'
  Thread.handle_interrupt(Timeout::Error => :never) {
    timeout(10){
      # Timeout::Error doesn't occur here
      Thread.handle_interrupt(Timeout::Error => :on_blocking) {
        # possible to be killed by Timeout::Error
        # while blocking operation
      }
      # Timeout::Error doesn't occur here
    }
  }
In the first part of the timeout block, we can rely on Timeout::Error being ignored. Then in the Timeout::Error => :on_blocking block, any operation that will block the calling thread is susceptible to a Timeout::Error exception being raised.
It’s possible to stack multiple levels of ::handle_interrupt blocks in order to control more than one ExceptionClass and TimingSymbol at a time.
  Thread.handle_interrupt(FooError => :never) {
    Thread.handle_interrupt(BarError => :never) {
      # FooError and BarError are prohibited.
    }
  }
All exceptions inherited from the ExceptionClass parameter will be considered.
  Thread.handle_interrupt(Exception => :never) {
    # all exceptions inherited from Exception are prohibited.
  }
For handling all interrupts, use Object and not Exception as the ExceptionClass, as kill/terminate interrupts are not handled by Exception.
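As a minimal sketch of that advice, deferring every interrupt, including kill/terminate, around a critical region (the region body is only a placeholder):

  Thread.handle_interrupt(Object => :never) {
    # kill/terminate and all exception interrupts are deferred here
  }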
Returns whether or not the asynchronous queue is empty.
Since Thread::handle_interrupt can be used to defer asynchronous events, this method can be used to determine if there are any deferred events.

If this method returns true, you may finish :never blocks.
For example, the following method processes deferred asynchronous events immediately.
  def Thread.kick_interrupt_immediately
    Thread.handle_interrupt(Object => :immediate) {
      Thread.pass
    }
  end
If error is given, then check only for error type deferred events.
  th = Thread.new{
    Thread.handle_interrupt(RuntimeError => :on_blocking){
      while true
        ...
        # reach safe point to invoke interrupt
        if Thread.pending_interrupt?
          Thread.handle_interrupt(Object => :immediate){}
        end
        ...
      end
    }
  }
  ...
  th.raise # stop thread
This example can also be written as the following, which you should use to avoid asynchronous interrupts.
  flag = true
  th = Thread.new{
    Thread.handle_interrupt(RuntimeError => :on_blocking){
      while true
        ...
        # reach safe point to invoke interrupt
        break if flag == false
        ...
      end
    }
  }
  ...
  flag = false # stop thread
Returns whether or not the asynchronous queue is empty for the target thread.
If error is given, then check only for error type deferred events.

See ::pending_interrupt? for more information.
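A timing-sensitive sketch of the instance form, assuming a worker that masks RuntimeError while it sleeps; exact results depend on scheduling:

  th = Thread.new do
    Thread.handle_interrupt(RuntimeError => :never) { sleep }
  end
  Thread.pass until th.status == "sleep"   # wait until the worker is inside the :never block
  th.raise "deferred"                      # masked by :never, so the event stays queued
  th.pending_interrupt?                    # => true
  th.pending_interrupt?(RuntimeError)      # => true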
Returns the execution stack for the target thread—an array containing backtrace location objects.
See Thread::Backtrace::Location for more information.

This method behaves similarly to Kernel#caller_locations except it applies to a specific thread.
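A short sketch, assuming this describes Thread#backtrace_locations and using a sleeping worker purely for illustration:

  th = Thread.new { sleep }
  Thread.pass until th.status == "sleep"
  th.backtrace_locations.first   # => a Thread::Backtrace::Location for the innermost frame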
In general, while a TracePoint callback is running, other registered callbacks are not called, to avoid confusion caused by reentrance. This method allows reentrance within a given block. It should be used carefully; otherwise the callback can easily be called infinitely.

If this method is called when reentrance is already allowed, it raises a RuntimeError.
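A hedged sketch (not the library’s own example): inside a callback, re-enable tracing for a nested region and use a flag so that region cannot re-enter itself indefinitely. The record_call helper is hypothetical.

  in_reentry = false
  tp = TracePoint.new(:call) do |t|
    next if in_reentry
    TracePoint.allow_reentry do
      in_reentry = true
      # Ruby-level calls made in this block are traced again; the flag
      # above keeps them from re-entering this body forever.
      record_call(t.method_id)   # hypothetical helper
      in_reentry = false
    end
  end
  tp.enable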
Returns an array of the names of global variables. This includes special regexp global variables such as $~ and $+, but does not include the numbered regexp global variables ($1, $2, etc.).
  global_variables.grep /std/   #=> [:$stdin, :$stdout, :$stderr]
Returns the names of the current local variables.
  fred = 1
  for i in 1..10
    # ...
  end
  local_variables   #=> [:fred, :i]
Returns true if yield would execute a block in the current context. The iterator? form is mildly deprecated.
  def try
    if block_given?
      yield
    else
      "no block"
    end
  end

  try                  #=> "no block"
  try { "hello" }      #=> "hello"
  try do "hello" end   #=> "hello"
Returns an array containing truthy elements returned by the block.
With a block given, calls the block with successive elements; returns an array containing each truthy value returned by the block:
  (0..9).filter_map {|i| i * 2 if i.even? }                              # => [0, 4, 8, 12, 16]

  {foo: 0, bar: 1, baz: 2}.filter_map {|key, value| key if value.even? } # => [:foo, :baz]
With no block given, returns an Enumerator.
With a block given, calls the block with each element, but in reverse order; returns self:
  a = []
  (1..4).reverse_each {|element| a.push(-element) } # => 1..4
  a # => [-4, -3, -2, -1]

  a = []
  %w[a b c d].reverse_each {|element| a.push(element) } # => ["a", "b", "c", "d"]
  a # => ["d", "c", "b", "a"]

  a = []
  h = {foo: 0, bar: 1, baz: 2}
  h.reverse_each {|element| a.push(element) } # => {:foo=>0, :bar=>1, :baz=>2}
  a # => [[:baz, 2], [:bar, 1], [:foo, 0]]
With no block given, returns an Enumerator.
With argument pattern, returns an enumerator that uses the pattern to partition elements into arrays (“slices”). An element ends the current slice if pattern === element:
  a = %w[foo bar fop for baz fob fog bam foy]
  e = a.slice_after(/ba/) # => #<Enumerator: ...>
  e.each {|array| p array }
Output:
["foo", "bar"] ["fop", "for", "baz"] ["fob", "fog", "bam"] ["foy"]
With a block, returns an enumerator that uses the block to partition elements into arrays. An element ends the current slice if its block return is a truthy value:
  e = (1..20).slice_after {|i| i % 4 == 2 } # => #<Enumerator: ...>
  e.each {|array| p array }
Output:
  [1, 2]
  [3, 4, 5, 6]
  [7, 8, 9, 10]
  [11, 12, 13, 14]
  [15, 16, 17, 18]
  [19, 20]
Other methods of the Enumerator class and the Enumerable module, such as map, are also usable.
For example, continuation lines (lines that end with a backslash) can be concatenated as follows:
lines = ["foo\n", "bar\\\n", "baz\n", "\n", "qux\n"] e = lines.slice_after(/(?<!\\)\n\z/) p e.to_a #=> [["foo\n"], ["bar\\\n", "baz\n"], ["\n"], ["qux\n"]] p e.map {|ll| ll[0...-1].map {|l| l.sub(/\\\n\z/, "") }.join + ll.last } #=>["foo\n", "barbaz\n", "\n", "qux\n"]
Returns the last Error of the currently executing Thread, or nil if none.
Sets the last Error of the currently executing Thread to error.
Enters exclusive section.
Returns true if this monitor is locked by any thread.
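A brief sketch of these two calls together, assuming they describe Monitor#enter and Monitor#mon_locked? from the standard monitor library:

  require 'monitor'

  m = Monitor.new
  m.mon_locked?   # => false
  m.enter         # enter the exclusive section
  m.mon_locked?   # => true
  m.exit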
Returns the source file origin from the given object.
See ::trace_object_allocations for more information and examples.
Returns the original line from source for the given object.
See ::trace_object_allocations for more information and examples.
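A hedged sketch, assuming these two methods are ObjectSpace.allocation_sourcefile and ObjectSpace.allocation_sourceline and that allocation tracing is active:

  require 'objspace'

  ObjectSpace.trace_object_allocations do
    obj = Object.new
    ObjectSpace.allocation_sourcefile(obj)   # => file that allocated obj
    ObjectSpace.allocation_sourceline(obj)   # => line number of that allocation
  end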
Adds aProc as a finalizer, to be called after obj was destroyed. The object ID of obj will be passed as an argument to aProc. If aProc is a lambda or method, make sure it can be called with a single argument.

The return value is an array [0, aProc].
The two recommended patterns are to either create the finalizer proc in a non-instance method where it can safely capture the needed state, or to use a custom callable object that stores the needed state explicitly as instance variables.
  class Foo
    def initialize(data_needed_for_finalization)
      ObjectSpace.define_finalizer(self, self.class.create_finalizer(data_needed_for_finalization))
    end

    def self.create_finalizer(data_needed_for_finalization)
      proc {
        puts "finalizing #{data_needed_for_finalization}"
      }
    end
  end

  class Bar
    class Remover
      def initialize(data_needed_for_finalization)
        @data_needed_for_finalization = data_needed_for_finalization
      end

      def call(id)
        puts "finalizing #{@data_needed_for_finalization}"
      end
    end

    def initialize(data_needed_for_finalization)
      ObjectSpace.define_finalizer(self, Remover.new(data_needed_for_finalization))
    end
  end
Note that if your finalizer references the object to be finalized it will never be run on GC, although it will still be run at exit. You will get a warning if you capture the object to be finalized as the receiver of the finalizer.
  class CapturesSelf
    def initialize(name)
      ObjectSpace.define_finalizer(self, proc {
        # this finalizer will only be run on exit
        puts "finalizing #{name}"
      })
    end
  end
Also note that finalization can be unpredictable and is never guaranteed to be run except on exit.
Removes all finalizers for obj.
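For example, a minimal sketch:

  obj = Object.new
  ObjectSpace.define_finalizer(obj, proc {|id| puts "finalized #{id}" })
  ObjectSpace.undefine_finalizer(obj)   # the proc above will no longer be called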