Threading in C#
Joseph Albahari
Last updated: 2011-4-27
Part 3: Using Threads
The Event-Based Asynchronous Pattern
The event-based asynchronous pattern (EAP) provides a simple means by which classes can offer multithreading capability without consumers needing to explicitly start or manage threads. It also provides features such as cooperative cancellation, progress reporting, and the forwarding of exceptions to a completion event.
The EAP is just a pattern, so these features must be
written by the implementer. Just a few classes in the Framework follow this
pattern, most notably BackgroundWorker
(which we’ll cover next), and WebClient
in System.Net
. Essentially the pattern is this: a class
offers a family of members that internally manage multithreading, similar to
the following (the highlighted sections indicate code that is part of the pattern):
// These members are from the WebClient class:
public byte[] DownloadData (Uri address); // Synchronous version
public void DownloadDataAsync (Uri address);
public void DownloadDataAsync (Uri address, object userToken);
public event DownloadDataCompletedEventHandler DownloadDataCompleted;
public void CancelAsync (object userState); // Cancels an operation
public bool IsBusy { get; } // Indicates if still running
The *Async
methods execute
asynchronously: in other words, they start an operation on another thread and
then return immediately to the caller. When the operation completes, the *Completed
event fires — automatically
calling Invoke
if required by a WPF or Windows Forms
application. This event passes back an event arguments object that contains:
- A flag indicating whether the operation was canceled (by the consumer calling CancelAsync)
- An Error object indicating an exception that was thrown (if any)
- The userToken object if supplied when calling the Async method
Here’s how we can use WebClient
’s
EAP members to download a web page:
var wc = new WebClient();
wc.DownloadStringCompleted += (sender, args) =>
{
if (args.Cancelled)
Console.WriteLine ("Canceled");
else if (args.Error != null)
Console.WriteLine ("Exception: " + args.Error.Message);
else
{
Console.WriteLine (args.Result.Length + " chars were downloaded");
// We could update the UI from here...
}
};
wc.DownloadStringAsync (new Uri ("http://www.linqpad.net")); // Start it
A class following the EAP may offer additional groups of asynchronous
methods. For instance:
public string DownloadString (Uri address);
public void DownloadStringAsync (Uri address);
public void DownloadStringAsync (Uri address, object userToken);
public event DownloadStringCompletedEventHandler DownloadStringCompleted;
However, these will share the same CancelAsync
and IsBusy
members. Therefore, only one asynchronous
operation can happen at once.
The EAP offers the possibility of economizing on
threads, if its internal implementation follows the APM (this is described in
Chapter 23 of C# 4.0 in a Nutshell).
We’ll see in Part 5 how Tasks
offer similar capabilities — including exception forwarding, continuations, cancellation
tokens, and support for synchronization contexts. This makes implementing
the EAP less attractive — except in simple cases where BackgroundWorker
will do.
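As a taste of what's to come, here's a rough sketch of the same download using Framework 4.0 tasks instead of the EAP (Part 5 covers tasks properly; the URI is just an example):

```csharp
using System;
using System.Net;
using System.Threading.Tasks;

class TaskSketch
{
    static void Main()
    {
        // Start the download on a pooled thread:
        Task<string> task = Task.Factory.StartNew
            (() => new WebClient().DownloadString ("http://www.linqpad.net"));

        // Attach a continuation that runs when the download completes or faults:
        task.ContinueWith (t =>
        {
            if (t.Exception != null)
                Console.WriteLine ("Exception: " + t.Exception.InnerException.Message);
            else
                Console.WriteLine (t.Result.Length + " chars were downloaded");
        });

        Console.ReadLine();
    }
}
```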
BackgroundWorker
BackgroundWorker is a helper class in the System.ComponentModel namespace for managing a worker thread. It can be considered a general-purpose implementation of the EAP, and provides the following features:
- A cooperative cancellation model
- The ability to safely update WPF or Windows Forms controls when the worker completes
- Forwarding of exceptions to the completion event
- A protocol for reporting progress
- An implementation of IComponent, allowing it to be sited in Visual Studio’s designer
BackgroundWorker
uses the thread pool, which means you should never call Abort
on a BackgroundWorker
thread.
Here are the minimum steps in using BackgroundWorker
:
- Instantiate BackgroundWorker and handle the DoWork event.
- Call RunWorkerAsync, optionally with an object argument.
This then sets it in motion. Any argument passed to RunWorkerAsync
will be forwarded to DoWork
’s
event handler, via the event argument’s Argument
property. Here’s an example:
class Program
{
static BackgroundWorker _bw = new BackgroundWorker();
static void Main()
{
_bw.DoWork += bw_DoWork;
_bw.RunWorkerAsync ("Message to worker");
Console.ReadLine();
}
static void bw_DoWork (object sender, DoWorkEventArgs e)
{
// This is called on the worker thread
Console.WriteLine (e.Argument); // writes "Message to worker"
// Perform time-consuming task...
}
}
BackgroundWorker
has a RunWorkerCompleted
event that fires after the DoWork
event handler has done its job. Handling RunWorkerCompleted
is not mandatory, but you usually do so
in order to query any exception that was thrown in DoWork
.
Further, code within a RunWorkerCompleted
event
handler is able to update user interface controls without explicit marshaling;
code within the DoWork
event handler cannot.
To add support for progress reporting:
- Set the WorkerReportsProgress property to true.
- Periodically call ReportProgress from within the DoWork event handler with a “percentage complete” value, and optionally, a user-state object.
- Handle the ProgressChanged event, querying its event argument’s ProgressPercentage property.
Code in the ProgressChanged event handler is free to interact with UI controls just as with RunWorkerCompleted. This is typically where you will update a progress bar.
To add support for cancellation:
- Set the WorkerSupportsCancellation property to true.
- Periodically check the CancellationPending property from within the DoWork event handler. If it’s true, set the event argument’s Cancel property to true, and return. (The worker can also set Cancel and exit without CancellationPending being true if it decides that the job is too difficult and it can’t go on.)
- Call CancelAsync to request cancellation.
Here’s an example that implements all the preceding
features:
using System;
using System.Threading;
using System.ComponentModel;
class Program
{
static BackgroundWorker _bw;
static void Main()
{
_bw = new BackgroundWorker
{
WorkerReportsProgress = true,
WorkerSupportsCancellation = true
};
_bw.DoWork += bw_DoWork;
_bw.ProgressChanged += bw_ProgressChanged;
_bw.RunWorkerCompleted += bw_RunWorkerCompleted;
_bw.RunWorkerAsync ("Hello to worker");
Console.WriteLine ("Press Enter in the next 5 seconds to cancel");
Console.ReadLine();
if (_bw.IsBusy) _bw.CancelAsync();
Console.ReadLine();
}
static void bw_DoWork (object sender, DoWorkEventArgs e)
{
for (int i = 0; i <= 100; i += 20)
{
if (_bw.CancellationPending) { e.Cancel = true; return; }
_bw.ReportProgress (i);
Thread.Sleep (1000); // Just for the demo... don't go sleeping
} // for real in pooled threads!
e.Result = 123; // This gets passed to RunWorkerCompleted
}
static void bw_RunWorkerCompleted (object sender,
RunWorkerCompletedEventArgs e)
{
if (e.Cancelled)
Console.WriteLine ("You canceled!");
else if (e.Error != null)
Console.WriteLine ("Worker exception: " + e.Error.ToString());
else
Console.WriteLine ("Complete: " + e.Result); // from DoWork
}
static void bw_ProgressChanged (object sender,
ProgressChangedEventArgs e)
{
Console.WriteLine ("Reached " + e.ProgressPercentage + "%");
}
}
Press Enter in the next 5 seconds to cancel
Reached 0%
Reached 20%
Reached 40%
Reached 60%
Reached 80%
Reached 100%
Complete: 123
Press Enter in the next 5 seconds to cancel
Reached 0%
Reached 20%
Reached 40%
You canceled!
Subclassing BackgroundWorker
is an
easy way to implement the EAP, in
cases when you need to offer only one asynchronously executing method.
BackgroundWorker
is not sealed
and provides a virtual OnDoWork
method, suggesting
another pattern for its use. In writing a potentially long-running method, you
could write an additional version returning a subclassed BackgroundWorker
,
preconfigured to perform the job concurrently. The consumer then needs to
handle only the RunWorkerCompleted
and ProgressChanged
events. For instance, suppose we wrote a
time-consuming method called GetFinancialTotals
:
public class Client
{
Dictionary <string,int> GetFinancialTotals (int foo, int bar) { ... }
...
}
We could refactor it as follows:
public class Client
{
public FinancialWorker GetFinancialTotalsBackground (int foo, int bar)
{
return new FinancialWorker (foo, bar);
}
}
public class FinancialWorker : BackgroundWorker
{
public Dictionary <string,int> Result; // You can add typed fields.
public readonly int Foo, Bar;
public FinancialWorker()
{
WorkerReportsProgress = true;
WorkerSupportsCancellation = true;
}
public FinancialWorker (int foo, int bar) : this()
{
this.Foo = foo; this.Bar = bar;
}
protected override void OnDoWork (DoWorkEventArgs e)
{
ReportProgress (0, "Working hard on this report...");
// Initialize financial report data
// ...
while (!<finished report>)
{
if (CancellationPending) { e.Cancel = true; return; }
// Perform another calculation step ...
// ...
ReportProgress (percentCompleteCalc, "Getting there...");
}
ReportProgress (100, "Done!");
e.Result = Result = <completed report data>;
}
}
Whoever calls GetFinancialTotalsBackground
then gets a FinancialWorker
: a wrapper to manage the
background operation with real-world usability. It can report progress, can be
canceled, is friendly with WPF and Windows Forms applications, and handles
exceptions well.
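From the consumer's side, using the refactored version might look like the following sketch (it assumes the Client and FinancialWorker types defined above):

```csharp
var worker = new Client().GetFinancialTotalsBackground (123, 456);

// Progress updates arrive via the standard BackgroundWorker event:
worker.ProgressChanged += (sender, args) =>
    Console.WriteLine (args.ProgressPercentage + "% " + args.UserState);

worker.RunWorkerCompleted += (sender, args) =>
{
    if (args.Cancelled)          Console.WriteLine ("Canceled");
    else if (args.Error != null) Console.WriteLine ("Error: " + args.Error.Message);
    else                         Console.WriteLine (worker.Result.Count + " totals computed");
};

worker.RunWorkerAsync();            // Kick off the job
// ... later, if needed:
// worker.CancelAsync();
```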
Interrupt and Abort
All blocking methods (such as Sleep, Join, EndInvoke, and Wait) block forever if the unblocking condition is never met and no timeout is specified. Occasionally, it can be useful to release a blocked thread prematurely; for instance, when ending an application. Two methods accomplish this:
- Thread.Interrupt
- Thread.Abort
The Abort
method is also
capable of ending a nonblocked thread — stuck, perhaps, in an infinite loop. Abort
is occasionally useful in niche scenarios; Interrupt
is almost never needed.
Interrupt
and Abort
can cause considerable trouble: it’s precisely
because they seem like obvious choices in solving a range of problems
that it’s worth examining their pitfalls.
Interrupt
Calling Interrupt on a blocked thread forcibly releases it, throwing a ThreadInterruptedException, as follows:
static void Main()
{
Thread t = new Thread (delegate()
{
try { Thread.Sleep (Timeout.Infinite); }
catch (ThreadInterruptedException) { Console.Write ("Forcibly "); }
Console.WriteLine ("Woken!");
});
t.Start();
t.Interrupt();
}
Forcibly Woken!
Interrupting a thread does not cause the thread to end,
unless the ThreadInterruptedException
is unhandled.
If Interrupt
is called on a
thread that’s not blocked, the thread continues executing until it next blocks,
at which point a ThreadInterruptedException
is
thrown. This avoids the need for the following test:
if ((worker.ThreadState & ThreadState.WaitSleepJoin) > 0)
worker.Interrupt();
which is not thread-safe because of the possibility of
preemption between the if
statement and worker.Interrupt
.
Interrupting a thread arbitrarily is dangerous, however,
because any framework or third-party methods in the calling stack could
unexpectedly receive the interrupt rather than your intended code. All it would
take is for the thread to block briefly on a simple lock
or synchronization resource, and any pending interruption would kick in. If the
method isn’t designed to be interrupted (with appropriate cleanup code in finally
blocks), objects could be left in an unusable
state or resources incompletely released.
Moreover, Interrupt
is
unnecessary: if you are writing the code that blocks, you can achieve the same
result more safely with a signaling construct — or Framework 4.0’s cancellation tokens. And if you want to
“unblock” someone else’s code, Abort
is nearly
always more useful.
Abort
A blocked thread can also be
forcibly released via its Abort
method. This has an
effect similar to calling Interrupt
, except that a ThreadAbortException
is thrown instead of a ThreadInterruptedException
. Furthermore, the exception
will be rethrown at the end of the catch
block (in
an attempt to terminate the thread for good) unless Thread.ResetAbort
is called within the catch
block. In the interim,
the thread has a ThreadState
of AbortRequested
.
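The rethrow behavior can be seen in the following sketch (note that on newer runtimes such as .NET Core, Thread.Abort is unsupported and throws PlatformNotSupportedException instead):

```csharp
using System;
using System.Threading;

class AbortDemo
{
    static void Main()
    {
        Thread t = new Thread (Work);
        t.Start();
        Thread.Sleep (500);
        t.Abort();
        t.Join();
    }

    static void Work()
    {
        try { Thread.Sleep (Timeout.Infinite); }
        catch (ThreadAbortException)
        {
            Console.WriteLine ("Aborted!");
            // Without a call to Thread.ResetAbort() here, the exception is
            // automatically rethrown at the end of this catch block,
            // ending the thread for good.
        }
        Console.WriteLine ("This line is reached only if we call ResetAbort");
    }
}
```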
An unhandled ThreadAbortException is one of only two types of exception that does not cause application shutdown (the other is AppDomainUnloadedException).
The big difference between Interrupt
and Abort
is what happens when it’s called on a
thread that is not blocked. Whereas Interrupt
waits
until the thread next blocks before doing anything, Abort
throws an exception on the thread right where it’s executing (unmanaged code
excepted). This is a problem because .NET Framework code might be aborted — code
that is not abort-safe. For example, if an abort occurs while a FileStream
is being constructed, it’s possible that an
unmanaged file handle will remain open until the application domain ends. This
rules out using Abort
in almost any nontrivial
context.
For more detail on why Abort is unsafe, see Aborting Threads in Part 4.
There are two cases, though, where you can safely use Abort
. One is if you are willing to tear down a thread’s
application domain after it is aborted. A good example of when you might do
this is in writing a unit-testing framework. Another case where you can call Abort
safely is on your own thread (because you know
exactly where you are). Aborting your own thread throws an “unswallowable”
exception: one that gets rethrown after each catch block. ASP.NET does exactly
this when you call Redirect
.
LINQPad aborts threads when you cancel a runaway query. After
aborting, it dismantles and re-creates the query’s application domain to avoid
the potentially polluted state that could otherwise occur.
Safe Cancellation
As we saw in the preceding section, calling Abort
on a thread is dangerous in most scenarios. The
alternative, then, is to implement a cooperative pattern whereby the
worker periodically checks a flag that indicates whether it should abort (like
in BackgroundWorker
).
To cancel, the instigator simply sets the flag, and then waits for the worker to comply. The BackgroundWorker helper class implements such a flag-based cancellation pattern, and you can easily implement one yourself.
The obvious disadvantage is that the worker method must be
written explicitly to support cancellation. Nonetheless, this is one of the few
safe cancellation patterns. To illustrate this pattern, we’ll first write a class to encapsulate the cancellation flag:
class RulyCanceler
{
object _cancelLocker = new object();
bool _cancelRequest;
public bool IsCancellationRequested
{
get { lock (_cancelLocker) return _cancelRequest; }
}
public void Cancel() { lock (_cancelLocker) _cancelRequest = true; }
public void ThrowIfCancellationRequested()
{
if (IsCancellationRequested) throw new OperationCanceledException();
}
}
OperationCanceledException
is a
Framework type intended for just this purpose. Any exception class will work just
as well, though.
We can use this as follows:
class Test
{
static void Main()
{
var canceler = new RulyCanceler();
new Thread (() => {
try { Work (canceler); }
catch (OperationCanceledException)
{
Console.WriteLine ("Canceled!");
}
}).Start();
Thread.Sleep (1000);
canceler.Cancel(); // Safely cancel worker.
}
static void Work (RulyCanceler c)
{
while (true)
{
c.ThrowIfCancellationRequested();
// ...
try { OtherMethod (c); }
finally { /* any required cleanup */ }
}
}
static void OtherMethod (RulyCanceler c)
{
// Do stuff...
c.ThrowIfCancellationRequested();
}
}
We could simplify our example by eliminating the RulyCanceler
class and adding the static boolean field _cancelRequest
to the Test
class.
However, doing so would mean that if several threads called Work
at once, setting _cancelRequest
to true
would cancel all workers. Our RulyCanceler
class is therefore a useful abstraction. Its
only inelegance is that when we look at the Work
method’s signature, the intention is unclear:
static void Work (RulyCanceler c)
Might the Work
method itself
intend to call Cancel
on the RulyCanceler
object? In this instance, the answer is no, so it would be nice if this could
be enforced in the type system. Framework 4.0 provides cancellation tokens for this exact purpose.
Cancellation Tokens
Framework 4.0 provides two types that formalize the
cooperative cancellation pattern that we just demonstrated: CancellationTokenSource
and CancellationToken
.
The two types work in tandem:
- A CancellationTokenSource defines a Cancel method.
- A CancellationToken defines an IsCancellationRequested property and a ThrowIfCancellationRequested method.
Together, these amount to a more sophisticated version of
the RulyCanceler
class in our previous example. But
because the types are separate, you can isolate the ability to cancel from the
ability to check the cancellation flag.
To use these types, first instantiate a CancellationTokenSource
object:
var cancelSource = new CancellationTokenSource();
Then, pass its Token
property
into a method for which you’d like to support cancellation:
new Thread (() => Work (cancelSource.Token)).Start();
Here’s how Work
would be
defined:
void Work (CancellationToken cancelToken)
{
cancelToken.ThrowIfCancellationRequested();
...
}
When you want to cancel, simply call Cancel
on cancelSource
.
CancellationToken
is actually a
struct, although you can treat it like a class. When implicitly copied, the
copies behave identically and reference the original CancellationTokenSource
.
The CancellationToken
struct
provides two additional useful members. The first is WaitHandle
,
which returns a wait handle that’s
signaled when the token is canceled. The second is Register
,
which lets you register a callback delegate that will be fired upon cancellation.
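For instance, here's Register in action (a minimal sketch):

```csharp
using System;
using System.Threading;

class RegisterDemo
{
    static void Main()
    {
        var source = new CancellationTokenSource();

        // The callback fires when (and if) the token is canceled:
        source.Token.Register (() => Console.WriteLine ("Cancellation requested"));

        source.Cancel();    // Writes "Cancellation requested"
    }
}
```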
Cancellation tokens are used within the .NET Framework itself, most notably in the following classes:
- ManualResetEventSlim and SemaphoreSlim
- CountdownEvent
- Barrier
- BlockingCollection<T>
- PLINQ and the Task Parallel Library
Most of these classes’ use of cancellation tokens is in
their Wait
methods. For example, if you Wait
on a ManualResetEventSlim
and specify a cancellation token, another thread can Cancel
its wait. This is much tidier and safer than calling Interrupt
on the blocked thread.
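Here's a sketch of canceling such a wait on a ManualResetEventSlim:

```csharp
using System;
using System.Threading;

class CancelWaitDemo
{
    static void Main()
    {
        var signal = new ManualResetEventSlim();
        var cancelSource = new CancellationTokenSource();

        // Cancel the wait from another thread after half a second:
        new Thread (() => { Thread.Sleep (500); cancelSource.Cancel(); }).Start();

        try
        {
            signal.Wait (cancelSource.Token);   // Blocks until signaled or canceled
        }
        catch (OperationCanceledException)
        {
            Console.WriteLine ("Wait was canceled");
        }
    }
}
```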
Lazy Initialization
A common problem in threading is how to lazily initialize
a shared field in a thread-safe fashion. The need arises when you have a field
of a type that’s expensive to construct:
class Foo
{
public readonly Expensive Expensive = new Expensive();
...
}
class Expensive { /* Suppose this is expensive to construct */ }
The problem with this code is that instantiating Foo
incurs the performance cost of instantiating Expensive
— whether or not the Expensive
field is ever accessed. The obvious answer is to construct the instance on
demand:
class Foo
{
Expensive _expensive;
public Expensive Expensive // Lazily instantiate Expensive
{
get
{
if (_expensive == null) _expensive = new Expensive();
return _expensive;
}
}
...
}
The question then arises, is this thread-safe? Aside from
the fact that we’re accessing _expensive
outside a lock without a memory
barrier, consider what would happen if two threads accessed this property
at once. They could both satisfy the if
statement’s
predicate and each thread end up with a different instance of Expensive
. As this may lead to subtle errors, we would
say, in general, that this code is not thread-safe.
The solution to the problem is to lock around checking and
initializing the object:
Expensive _expensive;
readonly object _expenseLock = new object();
public Expensive Expensive
{
get
{
lock (_expenseLock)
{
if (_expensive == null) _expensive = new Expensive();
return _expensive;
}
}
}
Framework 4.0 provides a new class called Lazy<T>
to help with lazy initialization. If
instantiated with an argument of true
, it implements
the thread-safe initialization pattern just described.
Lazy<T>
actually implements a
slightly more efficient version of this pattern, called double-checked locking. Double-checked locking
performs an additional volatile read to
avoid the cost of obtaining a lock if the object is
already initialized.
To use Lazy<T>
,
instantiate the class with a value factory delegate that tells it how to
initialize a new value, and the argument true
. Then
access its value via the Value
property:
Lazy<Expensive> _expensive = new Lazy<Expensive>
(() => new Expensive(), true);
public Expensive Expensive { get { return _expensive.Value; } }
If you pass false
into Lazy<T>
’s constructor, it implements the
thread-unsafe lazy initialization pattern that we described at the start of
this section — this makes sense when you want to use Lazy<T>
in a single-threaded context.
LazyInitializer
is a static
class that works exactly like Lazy<T>
except:
- Its functionality is exposed through a static method that
operates directly on a field in your own type. This avoids a level of
indirection, improving performance in cases where you need extreme
optimization.
- It offers another mode of initialization that has multiple
threads race to initialize.
To use LazyInitializer
, call EnsureInitialized
before accessing the field, passing a
reference to the field and the factory delegate:
Expensive _expensive;
public Expensive Expensive
{
get // Implement double-checked locking
{
LazyInitializer.EnsureInitialized (ref _expensive,
() => new Expensive());
return _expensive;
}
}
You can also pass in another argument to request that
competing threads race to initialize. This sounds similar to our
original thread-unsafe example, except that the first thread to finish always
wins — and so you end up with only one instance. The advantage of this technique
is that it’s even faster (on multicores) than double-checked locking — because it
can be implemented entirely without locks. This is an extreme optimization that
you rarely need, and one that comes at a cost:
- It’s slower when more threads race to initialize than you have
cores.
- It potentially wastes CPU resources performing redundant
initialization.
- The initialization logic must be thread-safe (in this case, it
would be thread-unsafe if
Expensive
’s constructor
wrote to static fields, for instance).
- If the initializer instantiates an object requiring disposal, the
“wasted” object won’t get disposed without additional logic.
For reference, here’s how double-checked locking is
implemented:
volatile Expensive _expensive;
readonly object _expenseLock = new object();
public Expensive Expensive
{
get
{
if (_expensive == null) // First check (outside lock)
lock (_expenseLock)
if (_expensive == null) // Second check (inside lock)
_expensive = new Expensive();
return _expensive;
}
}
And here’s how the race-to-initialize pattern is
implemented:
volatile Expensive _expensive;
public Expensive Expensive
{
get
{
if (_expensive == null)
{
var instance = new Expensive();
Interlocked.CompareExchange (ref _expensive, instance, null);
}
return _expensive;
}
}
Thread-Local Storage
Much of this article has focused on synchronization
constructs and the issues arising from having threads concurrently access the
same data. Sometimes, however, you want to keep data isolated, ensuring that
each thread has a separate copy. Local variables achieve exactly this, but they
are useful only with transient data.
The solution is thread-local
storage. You might be hard-pressed to think of a requirement: data you’d
want to keep isolated to a thread tends to be transient by nature. Its main
application is for storing “out-of-band” data — that which supports the execution
path’s infrastructure, such as messaging, transaction, and security tokens.
Passing such data around in method parameters is extremely clumsy and alienates
all but your own methods; storing such information in ordinary static fields
means sharing it among all threads.
Thread-local storage can also be useful in optimizing parallel code. It allows
each thread to exclusively access its own version of a thread-unsafe object
without needing locks — and without needing to reconstruct that object between
method calls.
There are three ways to implement thread-local storage.
[ThreadStatic]
The easiest approach to thread-local storage is to mark a
static field with the ThreadStatic
attribute:
[ThreadStatic] static int _x;
Each thread then sees a separate copy of _x
.
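For example (a minimal sketch):

```csharp
using System;
using System.Threading;

class ThreadStaticDemo
{
    [ThreadStatic] static int _x;

    static void Main()
    {
        _x = 1;                                   // The main thread's copy

        var t = new Thread (() =>
        {
            _x++;                                 // This thread's copy starts at 0
            Console.WriteLine ("Worker: " + _x);  // Worker: 1
        });
        t.Start();
        t.Join();

        Console.WriteLine ("Main: " + _x);        // Main: 1 (unaffected by the worker)
    }
}
```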
Unfortunately, [ThreadStatic]
doesn’t work with instance fields (it simply does nothing); nor does it play
well with field initializers — they execute only once on the thread
that's running when the static constructor executes. If you need to work with
instance fields — or start with a nondefault value — ThreadLocal<T>
provides a better option.
ThreadLocal<T>
ThreadLocal<T>
is new to
Framework 4.0. It provides thread-local storage for both static and instance
fields — and allows you to specify default values.
Here’s how to create a ThreadLocal<int>
with a default value of 3
for each thread:
static ThreadLocal<int> _x = new ThreadLocal<int> (() => 3);
You then use _x
’s Value
property to get or set its thread-local value. A
bonus of using ThreadLocal
is that values are lazily
evaluated: the factory function evaluates on the first call (for each thread).
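For example:

```csharp
using System;
using System.Threading;

class ThreadLocalDemo
{
    static ThreadLocal<int> _x = new ThreadLocal<int> (() => 3);

    static void Main()
    {
        _x.Value++;                               // The main thread's copy becomes 4

        var t = new Thread (() =>
            Console.WriteLine ("Worker: " + _x.Value));   // Worker: 3 (a fresh copy)
        t.Start();
        t.Join();

        Console.WriteLine ("Main: " + _x.Value);  // Main: 4
    }
}
```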
ThreadLocal<T> and instance fields
ThreadLocal<T>
is also
useful with instance fields and captured local variables. For example, consider
the problem of generating random numbers in a multithreaded environment. The Random
class is not thread-safe, so we have to either lock
around using Random
(limiting concurrency) or
generate a separate Random
object for each thread. ThreadLocal<T>
makes the latter easy:
var localRandom = new ThreadLocal<Random>(() => new Random());
Console.WriteLine (localRandom.Value.Next());
Our factory function for creating the Random
object is a bit simplistic, though, in that Random
’s
parameterless constructor relies on the system clock for a random number seed.
This may be the same for two Random
objects created
within ~10 ms of each other. Here’s one way to fix it:
var localRandom = new ThreadLocal<Random>
( () => new Random (Guid.NewGuid().GetHashCode()) );
We’ll use this in Part 5 (see the parallel spellchecking example in
“PLINQ”).
GetData and SetData
The third approach is to use two methods in the Thread
class: GetData
and SetData
. These store data in thread-specific “slots”. Thread.GetData
reads from a thread’s isolated data store; Thread.SetData
writes to it. Both methods require a LocalDataStoreSlot
object to identify the slot. The same
slot can be used across all threads and they’ll still get separate values.
Here’s an example:
class Test
{
// The same LocalDataStoreSlot object can be used across all threads.
LocalDataStoreSlot _secSlot = Thread.GetNamedDataSlot ("securityLevel");
// This property has a separate value on each thread.
int SecurityLevel
{
get
{
object data = Thread.GetData (_secSlot);
return data == null ? 0 : (int) data; // null == uninitialized
}
set { Thread.SetData (_secSlot, value); }
}
...
In this instance, we called Thread.GetNamedDataSlot
,
which creates a named slot — this allows sharing of that slot across the
application. Alternatively, you can control a slot’s scope yourself with an
unnamed slot, obtained by calling Thread.AllocateDataSlot
:
class Test
{
LocalDataStoreSlot _secSlot = Thread.AllocateDataSlot();
...
Thread.FreeNamedDataSlot
will
release a named data slot across all threads, but only once all references to
that LocalDataStoreSlot
have dropped out of scope
and have been garbage-collected. This ensures that threads don’t get data slots
pulled out from under their feet, as long as they keep a reference to the
appropriate LocalDataStoreSlot
object while the slot
is needed.
Timers
If you need to execute some method repeatedly at regular
intervals, the easiest way is with a timer.
Timers are convenient and efficient in their use of memory and
resources — compared with techniques such as the following:
new Thread (delegate() {
while (enabled)
{
DoSomeAction();
Thread.Sleep (TimeSpan.FromHours (24));
}
}).Start();
Not only does this permanently tie up a thread resource,
but without additional coding, DoSomeAction
will
happen at a later time each day. Timers solve these problems.
The .NET Framework provides four timers. Two of these are
general-purpose multithreaded timers:
- System.Threading.Timer
- System.Timers.Timer
The other two are special-purpose single-threaded timers:
- System.Windows.Forms.Timer (the Windows Forms timer)
- System.Windows.Threading.DispatcherTimer (the WPF timer)
The multithreaded timers are more powerful, accurate, and
flexible; the single-threaded timers are safer and more convenient for running
simple tasks that update Windows Forms controls or WPF elements.
Multithreaded Timers
System.Threading.Timer
is the
simplest multithreaded timer: it has just a constructor and two methods (a
delight for minimalists, as well as book authors!). In the following example, a
timer calls the Tick
method, which writes “tick...”
after five seconds have elapsed, and then every second after that, until the
user presses Enter:
using System;
using System.Threading;
class Program
{
static void Main()
{
// First interval = 5000ms; subsequent intervals = 1000ms
Timer tmr = new Timer (Tick, "tick...", 5000, 1000);
Console.ReadLine();
tmr.Dispose(); // This both stops the timer and cleans up.
}
static void Tick (object data)
{
// This runs on a pooled thread
Console.WriteLine (data); // Writes "tick..."
}
}
You can change a timer’s interval later by calling its Change
method. If you want a timer to fire just once,
specify Timeout.Infinite
in the constructor’s last
argument.
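For instance, here's a one-shot timer that we later convert to a recurring one via Change (a sketch):

```csharp
using System;
using System.Threading;

class OneShotDemo
{
    static void Main()
    {
        // Fires once, 3 seconds from now (Timeout.Infinite = no repeat):
        Timer tmr = new Timer (_ => Console.WriteLine ("Fired!"), null,
                               3000, Timeout.Infinite);

        Console.ReadLine();
        tmr.Change (0, 1000);     // Now fire immediately, then every second
        Console.ReadLine();
        tmr.Dispose();            // Stop the timer and clean up
    }
}
```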
The .NET Framework provides another timer class of the
same name in the System.Timers
namespace. This
simply wraps the System.Threading.Timer
, providing
additional convenience while using the identical underlying engine. Here’s a
summary of its added features:
- A Component implementation, allowing it to be sited in Visual Studio’s designer
- An Interval property instead of a Change method
- An Elapsed event instead of a callback delegate
- An Enabled property to start and stop the timer (its default value being false)
- Start and Stop methods in case you’re confused by Enabled
- An AutoReset flag for indicating a recurring event (default value is true)
- A SynchronizingObject property with Invoke and BeginInvoke methods for safely calling methods on WPF elements and Windows Forms controls
Here’s an example:
using System;
using System.Timers; // Timers namespace rather than Threading
class SystemTimer
{
static void Main()
{
Timer tmr = new Timer(); // Doesn't require any args
tmr.Interval = 500;
tmr.Elapsed += tmr_Elapsed; // Uses an event instead of a delegate
tmr.Start(); // Start the timer
Console.ReadLine();
tmr.Stop(); // Stop the timer
Console.ReadLine();
tmr.Start(); // Restart the timer
Console.ReadLine();
tmr.Dispose(); // Permanently stop the timer
}
static void tmr_Elapsed (object sender, EventArgs e)
{
Console.WriteLine ("Tick");
}
}
Multithreaded timers use the thread
pool to allow a few threads to serve many timers. This means that the callback
method or Elapsed
event may fire on a different thread
each time it is called. Furthermore, Elapsed
always
fires (approximately) on time — regardless of whether the previous Elapsed
has finished executing. Hence, callbacks or event
handlers must be thread-safe.
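One way to cope with this reentrancy is to skip a tick while the previous one is still executing. Here's a sketch using Interlocked (the field and method names are illustrative):

```csharp
using System;
using System.Threading;

class TickGuard
{
    static int _busy;    // 0 = idle, 1 = a callback is executing

    static void Tick (object state)
    {
        // Atomically claim the "busy" flag; skip this tick if another
        // callback is still running on a different pooled thread:
        if (Interlocked.CompareExchange (ref _busy, 1, 0) != 0) return;
        try
        {
            // ... do the (possibly slow) work here ...
            Thread.Sleep (200);
        }
        finally { Interlocked.Exchange (ref _busy, 0); }
    }

    static void Main()
    {
        var tmr = new Timer (Tick, null, 0, 100);   // Ticks faster than Tick completes
        Console.ReadLine();
        tmr.Dispose();
    }
}
```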
The precision of multithreaded timers depends on the
operating system, and is typically in the 10–20 ms region. If you need greater
precision, you can use native interop and call the Windows multimedia timer.
This has precision down to 1 ms and it is defined in winmm.dll.
First call timeBeginPeriod
to inform the operating
system that you need high timing precision, and then call timeSetEvent
to start a multimedia timer. When you’re done, call timeKillEvent
to stop the timer and timeEndPeriod
to inform the OS
that you no longer need high timing precision. You can find complete examples
on the Internet that use the multimedia timer by searching for the keywords dllimport
winmm.dll timesetevent.
Single-Threaded Timers
The .NET Framework provides timers designed to eliminate thread-safety issues for WPF and Windows
Forms applications:
- System.Windows.Threading.DispatcherTimer (WPF)
- System.Windows.Forms.Timer (Windows Forms)
The single-threaded timers are not designed to work outside their respective environments. If you use a Windows Forms timer in a Windows Service application, for instance, the Tick event won’t fire!
Both are like System.Timers.Timer
in the members that they expose (Interval
, Tick
, Start
, and Stop
) and are used in a similar manner. However, they
differ in how they work internally. Instead of using the thread pool to generate timer events, the WPF and
Windows Forms timers rely on the message pumping mechanism of their underlying
user interface model. This means that the Tick
event
always fires on the same thread that originally created the timer — which, in a
normal application, is the same thread used to manage all user interface
elements and controls. This has a number of benefits:
- You can forget about thread safety.
- A fresh Tick will never fire until the previous Tick has finished processing.
- You can update user interface elements and controls directly from Tick event handling code, without calling Control.Invoke or Dispatcher.Invoke.
It sounds too good to be true, until you realize that a program employing these timers is not really multithreaded — there is no parallel execution. One thread serves all timers — as well as processing UI events. This brings us to the disadvantage of single-threaded timers:
- Unless the Tick event handler executes quickly, the user interface becomes unresponsive.
This makes the WPF and Windows Forms timers suitable for
only small jobs, typically those that involve updating some aspect of the user
interface (e.g., a clock or countdown display). Otherwise, you need a
multithreaded timer.
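For example, here's a minimal WPF sketch of such a job: updating a clock display once a second (it assumes a window with a TextBlock field named _txtClock, which is purely illustrative):

```csharp
using System;
using System.Windows.Threading;

// Inside the window's constructor or Loaded handler:
var timer = new DispatcherTimer();
timer.Interval = TimeSpan.FromSeconds (1);

// Tick fires on the UI thread, so we can touch _txtClock directly:
timer.Tick += (sender, args) => _txtClock.Text = DateTime.Now.ToString ("T");
timer.Start();
```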
In terms of precision, the single-threaded timers are
similar to the multithreaded timers (tens of milliseconds), although they are
typically less accurate, because they can be delayed while other user
interface requests (or other timer events) are processed.
Threading in C# is from Chapters 21 and 22 of C# 4.0 in a Nutshell.
© 2006-2014 Joseph Albahari, O'Reilly Media, Inc. All rights reserved