Double-checked locking

In software engineering, double-checked locking (also known as "double-checked locking optimization"[1]) is a software design pattern used to reduce the overhead of acquiring a lock by testing the locking criterion (the "lock hint") before acquiring the lock. Locking occurs only if the locking criterion check indicates that locking is required. The original form of the pattern, appearing in Pattern Languages of Program Design 3,[2] has data races, depending on the memory model in use, and it is hard to get right. Some consider it to be an anti-pattern.[3] There are valid forms of the pattern, including the use of the volatile keyword in Java and explicit memory barriers in C++, which are described below. The pattern is typically used to reduce locking overhead when implementing "lazy initialization" in a multi-threaded environment, especially as part of the Singleton pattern. Lazy initialization avoids initializing a value until the first time it is accessed.

Motivation and original pattern

Consider, for example, this code segment in the Java programming language:[4]

// Single-threaded version
class Foo {
private Helper helper;
public Helper getHelper() {
if (helper == null) {
helper = new Helper();
}
return helper;
}
// other functions and members...
}
The problem is that this does not work when using multiple threads. A lock must be obtained in case two threads call getHelper() simultaneously; otherwise, either they may both try to create the object at the same time, or one may wind up getting a reference to an incompletely initialized object. Synchronizing with a lock can fix this, as is shown in the following example:

// Correct but possibly expensive multithreaded version
class Foo {
private Helper helper;
public synchronized Helper getHelper() {
if (helper == null) {
helper = new Helper();
}
return helper;
}
// other functions and members...
}
This is correct and will most likely have sufficient performance. However, the first call to getHelper() is the only one that actually creates the helper; every subsequent call merely returns the existing reference, yet still pays the cost of acquiring and releasing the lock. Many programmers, including the authors of the double-checked locking pattern, have attempted to optimize away this seemingly unnecessary overhead in the following manner:
// Broken multithreaded version
// original "Double-Checked Locking" idiom
class Foo {
private Helper helper;
public Helper getHelper() {
if (helper == null) {
synchronized (this) {
if (helper == null) {
helper = new Helper();
}
}
}
return helper;
}
// other functions and members...
}
Intuitively, this algorithm is an efficient solution to the problem. But if the pattern is not written carefully, it will have a data race. For example, consider the following sequence of events:

1. Thread A notices that helper is not initialized, so it obtains the lock and begins to construct the Helper object.
2. Depending on the memory model and the code generated by the compiler, the write that publishes the new object to the shared helper field may become visible to other threads before the constructor has finished running, so helper can briefly point to a partially constructed object.
3. Thread B notices that helper appears to be non-null and returns it without acquiring the lock. If B then uses the object before all of the initialization performed by A has become visible to B, the program may behave incorrectly or crash.
Most runtimes have memory barriers or other methods for managing memory visibility across execution units. Without a detailed understanding of the language's behavior in this area, the algorithm is difficult to implement correctly. One of the dangers of using double-checked locking is that even a naive implementation will appear to work most of the time: it is not easy to distinguish between a correct implementation of the technique and one that has subtle problems. Depending on the compiler, the interleaving of threads by the scheduler and the nature of other concurrent system activity, failures resulting from an incorrect implementation of double-checked locking may only occur intermittently. Reproducing the failures can be difficult.

Usage in C++11

For the singleton pattern, double-checked locking is not needed: C++11 guarantees that if control enters the declaration of a block-scope static variable concurrently while the variable is being initialized, the concurrent execution waits for the initialization to complete.
Singleton& GetInstance() {
static Singleton s;
return s;
}
C++11 and beyond also provide a built-in double-checked locking pattern in the form of std::once_flag and std::call_once:

#include <mutex>
#include <optional> // Since C++17
// Singleton.h
class Singleton {
public:
static Singleton* GetInstance();
private:
Singleton() = default;
static std::optional<Singleton> s_instance;
static std::once_flag s_flag;
};
// Singleton.cpp
std::optional<Singleton> Singleton::s_instance;
std::once_flag Singleton::s_flag{};
Singleton* Singleton::GetInstance() {
std::call_once(Singleton::s_flag,
[]() { s_instance.emplace(Singleton{}); });
return &*s_instance;
}
If one truly wishes to use the double-checked idiom instead of the trivially working example above (for instance because Visual Studio before the 2015 release did not implement the C++11 standard's guarantee about concurrent initialization of static local variables described above[7]), one needs to use acquire and release fences:[8]

#include <atomic>
#include <mutex>
class Singleton {
public:
static Singleton* GetInstance();
private:
Singleton() = default;
static std::atomic<Singleton*> s_instance;
static std::mutex s_mutex;
};
std::atomic<Singleton*> Singleton::s_instance{nullptr};
std::mutex Singleton::s_mutex;

Singleton* Singleton::GetInstance() {
Singleton* p = s_instance.load(std::memory_order_acquire);
if (p == nullptr) { // 1st check
std::lock_guard<std::mutex> lock(s_mutex);
p = s_instance.load(std::memory_order_relaxed);
if (p == nullptr) { // 2nd (double) check
p = new Singleton();
s_instance.store(p, std::memory_order_release);
}
}
return p;
}
Usage in POSIX
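POSIX threads provide pthread_once(), which runs a caller-supplied initialization routine exactly once even when several threads race to trigger it, so a hand-written double-checked lock is not needed. The following is a minimal sketch in C; the Helper struct, its field, and the get_helper() accessor are hypothetical illustrations, while pthread_once() and PTHREAD_ONCE_INIT are the actual POSIX facilities.

#include <pthread.h>
#include <stdlib.h>

/* Hypothetical lazily initialized object. */
typedef struct Helper {
    int value;
} Helper;

static Helper *helper = NULL;
static pthread_once_t helper_once = PTHREAD_ONCE_INIT;

/* Runs at most once, regardless of how many threads call get_helper(). */
static void init_helper(void) {
    helper = malloc(sizeof *helper);
    if (helper != NULL) {
        helper->value = 42; /* placeholder initialization */
    }
}

Helper *get_helper(void) {
    /* pthread_once() blocks concurrent callers until init_helper() has
       completed; afterwards it reduces to a cheap check of the once control. */
    pthread_once(&helper_once, init_helper);
    return helper;
}

As with sync.Once in the Go example below, the once control plays the role of the "lock hint", and the library handles the memory-ordering details that make the hand-written idiom hard to get right.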
Usage in Go

package main
import "sync"
var arrOnce sync.Once
var arr []int
// getArr retrieves arr, lazily initializing on first call. Double-checked
// locking is implemented with the sync.Once library function. The first
// goroutine to win the race to call Do() will initialize the array, while
// others will block until Do() has completed. After Do has run, only a
// single atomic comparison will be required to get the array.
func getArr() []int {
arrOnce.Do(func() {
arr = []int{0, 1, 2}
})
return arr
}
func main() {
// thanks to double-checked locking, two goroutines attempting to getArr()
// will not cause double-initialization
go getArr()
go getArr()
}
Usage in Java

As of J2SE 5.0, the volatile keyword is defined to create a memory barrier. This allows a solution that ensures that multiple threads handle the singleton instance correctly. This new idiom is described in [3] and [4].

// Works with acquire/release semantics for volatile in Java 1.5 and later
// Broken under Java 1.4 and earlier semantics for volatile
class Foo {
private volatile Helper helper;
public Helper getHelper() {
Helper localRef = helper;
if (localRef == null) {
synchronized (this) {
localRef = helper;
if (localRef == null) {
helper = localRef = new Helper();
}
}
}
return localRef;
}
// other functions and members...
}
Note the local variable "localRef", which seems unnecessary. The effect of this is that in cases where helper is already initialized (i.e., most of the time), the volatile field is only accessed once (due to "return localRef;" instead of "return helper;"), which can improve the method's overall performance by as much as 40 percent.[9]

Java 9 introduced the VarHandle class, which provides acquire/release access modes for fields and can be used to implement the same idiom, as in the following variant:

import java.lang.invoke.MethodHandles;
import java.lang.invoke.VarHandle;

// Works with acquire/release semantics for VarHandles introduced in Java 9
class Foo {
private volatile Helper helper;
public Helper getHelper() {
Helper localRef = getHelperAcquire();
if (localRef == null) {
synchronized (this) {
localRef = getHelperAcquire();
if (localRef == null) {
localRef = new Helper();
setHelperRelease(localRef);
}
}
}
return localRef;
}
private static final VarHandle HELPER;
private Helper getHelperAcquire() {
return (Helper) HELPER.getAcquire(this);
}
private void setHelperRelease(Helper value) {
HELPER.setRelease(this, value);
}
static {
try {
MethodHandles.Lookup lookup = MethodHandles.lookup();
HELPER = lookup.findVarHandle(Foo.class, "helper", Helper.class);
} catch (ReflectiveOperationException e) {
throw new ExceptionInInitializerError(e);
}
}
// other functions and members...
}
If the helper object is static (one per class loader), an alternative is the initialization-on-demand holder idiom[11] (see Listing 16.6[12] from the previously cited text):

// Correct lazy initialization in Java
class Foo {
private static class HelperHolder {
public static final Helper helper = new Helper();
}
public static Helper getHelper() {
return HelperHolder.helper;
}
}
This relies on the fact that nested classes are not loaded until they are referenced.

The semantics of final fields in Java 5 can be employed to safely publish the helper object without using volatile:[13]

public class FinalWrapper<T> {
public final T value;
public FinalWrapper(T value) {
this.value = value;
}
}
public class Foo {
private FinalWrapper<Helper> helperWrapper;
public Helper getHelper() {
FinalWrapper<Helper> tempWrapper = helperWrapper;
if (tempWrapper == null) {
synchronized (this) {
if (helperWrapper == null) {
helperWrapper = new FinalWrapper<Helper>(new Helper());
}
tempWrapper = helperWrapper;
}
}
return tempWrapper.value;
}
}
The local variable tempWrapper is required for correctness: simply using helperWrapper for both the null checks and the return statement could fail due to read reordering allowed under the Java Memory Model.[14] Performance of this implementation is not necessarily better than the volatile implementation.

Usage in C#

In .NET Framework 4.0, the Lazy<T> class was introduced; by default it uses double-checked locking internally (LazyThreadSafetyMode.ExecutionAndPublication) to store either the exception thrown during construction or the result of the function passed to Lazy<T>:

public class MySingleton
{
private static readonly Lazy<MySingleton> _mySingleton = new Lazy<MySingleton>(() => new MySingleton());
private MySingleton() { }
public static MySingleton Instance => _mySingleton.Value;
}