2. Are post-increment and pre-increment atomic operations?
What is an atomic operation?
An atomic operation is one that is performed in a single step, without the possibility of interruption. This is really an OS/hardware concept more than a programming-language concept. Imagine two threads (t1 and t2) trying to increment the same variable at some memory location: t1 reads the value, but before it can write the incremented value back to memory it is suspended, and t2 is allowed to run:
t2 reads the value in the memory location, the same value that t1 read;
t2 adds one to the value; t2 writes the new value into the memory location. When t1 finally resumes and writes its own incremented value back, t2's update is silently lost.
Is post-increment an atomic operation?
“Atomic” comes from Latin and means “indivisible”. The increment operator is not indivisible; it has 3 separate parts:
1. Load value from memory into register.
2. Increment register.
3. Store value from register back into memory.
So, for example, if you have two threads that try to increment a number, if the number starts from zero you could get either 1 or 2 as a result, at random. You get 1 when an interleaving similar to the below happens:
Thread 1 loads zero into a register.
Thread 2 loads zero into a register.
Thread 1 increments register (0 -> 1).
Thread 2 increments register (0 -> 1).
Thread 1 writes the value “1” into memory.
Thread 2 writes the value “1” into memory.
In many languages there are special variants of the increment operator that are atomic. Normally they are supported by the hardware itself (this is harder than it sounds; AFAIK it’s not just the CPU, but the caches and main memory and everything needs to work together to provide this). You should either use those, or manual synchronization (say, a lock - so that the increments run start to finish, and others are prohibited from starting while another one is running).
Is pre-increment an atomic operation?
No. Pre-increment decomposes into the same load/increment/store sequence, so it is equally non-atomic.
How do vectors work internally in C++?
Vectors are implemented as dynamic arrays.
A dynamic array is an array in heap memory of size capacity. For simplicity, let's assume here and below that the capacity is a power of 2. As an abstract object, a dynamic array stores size elements, which may be fewer than capacity. A dynamic array, or STL vector, supports the following operations besides the basic accessors:
push_back(element). Adds an element to the end of the array. If size < capacity, it puts the element at index [size] of the internal array and increments size; the complexity is O(1). If size = capacity, it allocates a new heap array of double the previous capacity, copies all elements from the old array into the new one, deletes the old array, and then appends the new element. This single operation is O(n); however, the amortized complexity of adding n elements to a vector is O(n), i.e. O(1) per element.
pop_back(). Decrements size. This operation does not decrease the capacity of the vector, because shrinking eagerly could make a sequence of n alternating push/pop operations cost O(n^2).
shrink_to_fit(). After popping many elements, one can call this method to decrease the capacity of the vector. It should not be called after every pop: n pop-shrink-push cycles can cost O(n^2).
reserve(number of elements). When the size the dynamic array will need is known in advance, one can call this method to raise the capacity up front, so that the vector skips the intermediate reallocations.
An implementation of a dynamic array may support more operations, and its capacity need not be a power of 2: it may grow by a different factor, for example 1.5. Dynamic arrays are perfect candidates for stacks and, with some modification, can also back queues.
Another answer from wiki:
A typical vector implementation consists, internally, of a pointer to a dynamically allocated array,[2] and possibly data members holding the capacity and size of the vector. The size of the vector refers to the actual number of elements, while the capacity refers to the size of the internal array. When new elements are inserted, if the new size of the vector becomes larger than its capacity, reallocation occurs.[2][4] This typically causes the vector to allocate a new region of storage, move the previously held elements to the new region of storage, and free the old region. Because the addresses of the elements change during this process, any references or iterators to elements in the vector become invalidated.[5] Using an invalidated reference causes undefined behaviour.
How is std::map implemented in C++?
std::map is internally implemented using a red-black tree.
Probably the two most common self balancing tree algorithms are Red-Black trees and AVL trees. To balance the tree after an insertion/update both algorithms use the notion of rotations where the nodes of the tree are rotated to perform the re-balancing.
While the insert/delete operations are O(log n) in both algorithms, a red-black tree needs only a constant number of rotations per rebalance (O(1)), whereas an AVL rebalance may perform O(log n) rotations. This makes the red-black tree more efficient in the rebalancing stage and is one likely reason it is more commonly used.
Red-Black trees are used in most collection libraries, including the offerings from Java and Microsoft .NET Framework.
[2] Answer:
It really depends on the usage. An AVL tree usually performs more rebalancing rotations. So if your application doesn't have too many insertion and deletion operations but weighs heavily on searching, then an AVL tree is probably a good choice.
std::map uses Red-Black tree as it gets a reasonable trade-off between the speed of node insertion/deletion and searching.
Why not a hash table?
A type only requires the < operator (comparison) to be used as a key in a tree. Hash tables, however, require that each key type has a hash function defined. Keeping type requirements to a minimum is very important for generic programming, so the container can be used with a wide variety of types and algorithms.
Designing a good hash table requires intimate knowledge of the context in which it will be used. Should it use open addressing, or linked chaining? What levels of load should it accept before resizing? Should it use an expensive hash that avoids collisions, or one that is rough and fast?
Since the STL can't anticipate which is the best choice for your application, the default needs to be more flexible. Trees "just work" and scale nicely.
(C++11 did add hash tables with unordered_map. You can see from the documentation it requires setting policies to configure many of these options.)
What about other trees?
Red-black trees offer fast lookup and are self-balancing, unlike plain binary search trees. Another answer above points out their advantages over the self-balancing AVL tree.
Alexander Stepanov (The creator of STL) said that he would use a B* Tree instead of a Red-Black tree if he wrote std::map again, because it is more friendly for modern memory caches.
One of the biggest changes since then has been the growth of caches. Cache misses are very costly, so locality of reference is much more important now. Node-based data structures, which have low locality of reference, make much less sense. If I were designing STL today, I would have a different set of containers. For example, an in-memory B*-tree is a far better choice than a red-black tree for implementing an associative container. - Alexander Stepanov
Should maps always use trees?
Another possible map implementation would be a sorted vector (kept sorted on insertion) plus binary search. This works well for containers which aren't modified often but are queried frequently. I often do this in C, as qsort and bsearch are built in.
Do I even need to use map?
Cache considerations mean it rarely makes sense to use std::list or std::deque over std::vector, even in those situations we were taught about in school (such as removing an element from the middle of the list). Applying that same reasoning, using a for loop to linearly search a list is often more efficient and cleaner than building a map for a few lookups.
Of course choosing a readable container is usually more important than performance.
The two most popular self-balancing trees are AVL and red-black (RB). The main difference lies in the workload:
AVL: better if the ratio of consultation (reads) to manipulation (modifications) is high. Its memory footprint is a little smaller than RB's (RB needs the extra bit required for coloring).
RB: better in general cases where there is a balance between consultation (reads) and manipulation (modifications), or more modifications than consultations. A slightly bigger memory footprint due to storing the red/black flag.
The main difference comes from the coloring. An RB tree performs fewer rebalancing actions than an AVL tree, because the coloring lets it sometimes skip or shorten rebalancing actions, which have a relatively high cost. Because of the coloring, an RB tree can also be taller, since red nodes are accepted between black ones (up to roughly 2x more levels), making searches (reads) a little less efficient; but because that is a constant factor (2x), lookup stays O(log n).
If you weigh the significant performance hit of modifying a tree against the almost insignificant extra cost of consulting it, it becomes natural to prefer RB over AVL in the general case.
Find the unique elements of a vector C++
So for small enough n (<= 1e8), the sort-and-remove approach (using std::sort() and std::unique()) is still often faster than hash tables.
Sample code, O(n log n) (completed with the includes and main needed to run):
#include <algorithm>
#include <iostream>
#include <vector>
using namespace std;

int main() {
    vector<int> A = {1, 2, 3, 1, 2, 5};
    sort(A.begin(), A.end());                     // group duplicates together
    A.erase(unique(A.begin(), A.end()), A.end()); // erase-remove idiom
    for (int &x : A)
        cout << x << " ";                         // prints: 1 2 3 5
    return 0;
}
If your elements are hashable, you can use a std::unordered_map<T, int> to store the count of each element, which takes amortized linear time.
For small lists, sorting and then doing a linear pass might still be faster. Also note that this copies your elements into the hash table, which might be expensive in some cases.
Which one should we use, std::vector or std::list?
Consider the following points...
1] Traversal:
List nodes are scattered across memory, so list traversal causes cache misses. Traversal of a vector, being contiguous, is smooth.
2] Insertion and Deletion:
On average, 50% of the elements must be shifted when you insert into or erase from the middle of a vector, but caches are very good at exactly that. With lists, you first need to traverse to the point of insertion/deletion, so, again, cache misses. Surprisingly, vectors often win this case as well.
3] Storage:
When you use lists, there are two pointers per element (forward and backward), so a list is much bigger than a vector. Vectors need just a little more memory than the actual elements require.
You should have a reason not to use a vector.
std::vector is dramatically faster than std::list at finding an element
std::vector always performs faster than std::list with very small data
std::vector is always faster at pushing elements to the back than std::list
std::list handles large elements very well, especially for sorting or inserting at the front
Is shared_ptr thread-safe?
A std::shared_ptr consists of a control block and its resource. The control block is thread-safe, but access to the resource is not. This means modifying the reference counter is an atomic operation and you have the guarantee that the resource is deleted exactly once. These are the guarantees std::shared_ptr gives you.
It is, nevertheless, crucial that std::shared_ptr has well-defined multithreading semantics. At first glance, a std::shared_ptr does not appear to be a sensible choice for multithreaded code: it is by definition shared and mutable, an ideal candidate for non-synchronized read and write operations and hence for undefined behavior. On the other hand, modern C++ has the guideline: don't use raw pointers. Consequently, you should use smart pointers in multithreaded programs when you want to model shared ownership.
Can you implement atomic operation for integer variable?
Both are smart pointers that release the resource when it is no longer referenced.
a)unique_ptr
Use unique_ptr when you want single (exclusive) ownership of a resource. Only one unique_ptr can point to a given resource; since there can be only one unique_ptr per resource, it is not possible to copy one unique_ptr to another.
b)shared_ptr
Use shared_ptr if you want to share ownership of a resource. Many shared_ptrs can point to a single resource; shared_ptr maintains a reference count for this purpose. When all shared_ptrs pointing to the resource go out of scope, the resource is destroyed.
The idea behind both auto_ptr and unique_ptr is to grant single ownership of a dynamic object.
In languages like C++ that are object-oriented but have no implicit memory management, if you create an arbitrary network of pointed-to objects you risk losing control of object deletion, since it is no longer clear who should call delete, and when, on an object that was constructed via new.
One way to avoid that problem is to introduce a discipline about who (among the possibly many pointers that refer to an object) owns the object, in the sense that it will take care of its deletion upon its own destruction.
Imagine a situation like this:
void function()
{
    Object* p = new AnObject();
    something_may_throw(p);
    delete p;
}
Imagine that something_may_throw in fact... throws. delete p will never be executed and the allocation leaks.
The idea of smart pointers starts from a concept like this:
template<class T>
class ptr
{
T* p;
public:
ptr(T* s) :p(s) {}
T* get() const { return p; }
T* operator->() const { return p; }
T& operator*() const { return *p; }
~ptr() { delete p; }
};
Now re-think the function as
void function()
{
    ptr<Object> p(new AnObject());
    something_may_throw(p.get());
}
There is no need to delete p (p's destructor will), and no need to clean up on premature exit: in case of a throw, the function scope is exited, hence p is destroyed, and since its destructor calls delete, everything always works.
The problem with a ptr class like that is that if you want to return p from the function and assign it to another variable, you run into trouble:
Default copy and assignment will copy the inner pointer, so that there are two smart pointers -in different scopes- each pretending to delete a same object. This will lead to a pointer pointing to a deleted object first and to a double deletion later.
In C++98 and C++03, no "r-value reference" existed, so copy and assignment were essentially used to implement a "move".
template<class T>
class auto_ptr
{
T* p;
public:
auto_ptr(T* s) :p(s) {}
~auto_ptr() { delete p; }
auto_ptr(auto_ptr& a) // note: not const
{ p = a.p; a.p = NULL; }
auto_ptr& operator=(auto_ptr& a) // note: non const
{ delete p; p=a.p; a.p = NULL; return *this; }
T& operator*() const { return *p; }
T* operator->() const { return p; }
};
Now if you do an assignment like pa = pb, pa takes pb's inner pointer and pb becomes null. There is always exactly one "active pointer" that can delete your object.
The same holds if you do return p: the copy from the inner to the outer scope (if not elided) and the further assignment to the final variable (as in a = fn();) transfer the inner pointer to the final variable (which will manage deletion), leaving all the intermediates null (so they delete nothing).
The problem with a class like auto_ptr is that it is impossible to implement deep copy, and impossible to use it in generic containers whose implementation presumes the ability to copy objects from one place to another or to duplicate them.
In other words, using copy to implement move makes it impossible to discern whether a class is suitable for copying or for moving. And when the language does not allow a distinct "move" operation (in C++98, return has copy semantics), this leads to subtle bugs: in generic code you write a = b and don't know whether this will change b or not.
C++11 introduced r-value references which, together with type deduction and binding rules, allow move to be distinguished from copy: non-temporary objects bind to references, while temporary objects bind to r-value references. And std::move casts a non-temporary (l-value) into a temporary (r-value).
Containers were re-implemented to move objects from place to place (instead of copying them and deleting the old), and return now yields a temporary (so a = fn() can move the returned value into a, while a = b still makes a copy of b into a).
In other words, since C++11 copy and move are two distinct semantics; since auto_ptr uses copy to implement a move, it becomes inconsistent.
unique_ptr does not have a copy constructor (it is deleted by the existence of the move constructor) or a copy assignment (deleted by the move assignment).
So pa = pb produces a compile error, and you must always write pa = std::move(pb).
At the same time pa = fn() works: the return value of fn is a temporary by definition, so it is moved into the parameter of operator=, nulling the pointer inside fn, and then transferred from the returned object into pa.
Writing generic code becomes safe: a = b and a = std::move(b) call two distinct functions, and the two operations can behave differently. And if one of the two is not implemented, you get a corresponding compiler error, not the "wrong one" silently standing in for the other.
Originally Answered: What is the scenario where c++ make_shared better than shared_ptr constructor?
Although this feels like a homework question, it actually has a surprising bit of subtlety.
I’m going to ignore the std::shared_ptr<> constructors that default-construct an empty shared pointer, or which construct a new reference to an existing shared pointer. Those are clearly separate from the service std::make_shared<> provides.
std::shared_ptr<foo>(foo *) constructs a new shared pointer, pointing to an object foo allocated and constructed elsewhere. In order to track the shared pointer ownership, the constructor allocates a control block.
The shared pointer itself holds a pointer to the object (for quick access), and a pointer to the control block.
The control block holds:
A pointer to the object being held, or the object itself.
The allocator for this object.
The deallocator for this object.
A strong reference count: the number of std::shared_ptr<foo> that point at this specific object.
A weak reference count: the number of std::weak_ptr<foo> that point at this specific object.
Now the first thing you might notice is that the control block could hold the object itself. That’s where std::make_shared<foo> comes in.
If you create a shared pointer from a pointer created elsewhere, the control block has no choice but to store a pointer to it. (Let's assume copying and moving the object are out of the question.) That means a single std::shared_ptr involves, at minimum, two heap allocations plus the pointer object itself:
The object itself.
The control block.
The std::shared_ptr<> objects that point to both (these typically live on the stack, not the heap).
std::make_shared<> cuts out one of the heap allocations by combining the object with the control block. If you have a lot of shared pointers and a low sharing ratio, that seems like a win, doesn't it? After all, control blocks aren't very big, so the overhead of allocating a lot of them relative to your objects can really add up.
Well, that’s where the last bullet could bite you.
The object gets deleted when the strong reference count drops to 0. The control block, however, gets deleted when both the strong reference and weak reference count drop to 0.
std::weak_ptr<foo> also points to the same control block. It needs the control block so it can determine if the object is still alive when you call lock() on the pointer. That’s why the control block maintains a weak reference count separately of the strong count.
So, consider this code fragment:
auto sp = std::shared_ptr<LargeObject>(new LargeObject());
auto wp = std::weak_ptr<LargeObject>(sp);
sp.reset();
We first create a shared pointer to a LargeObject, and then make a weak reference to the same object.
For the coup de grace, we reset the shared pointer. This calls delete on the LargeObject, freeing that storage, while retaining the control block.
Now consider this code fragment:
auto sp = std::make_shared<LargeObject>();
auto wp = std::weak_ptr<LargeObject>(sp);
sp.reset();
It's very similar, only we've replaced the separate call to new with std::make_shared. This version constructs both the LargeObject and the control block in a single allocation.
Now when we reset the shared pointer, it only calls the object’s destructor. The space occupied by the object is not freed. It can’t be, as it’s part of the same allocation that holds the control block! That storage remains alive as long as the weak pointer is alive.
As a result, I’d say that std::make_shared is preferred under most circumstances. But, if your usage pattern meets these two criteria:
The object being pointed to is large, and
You also take weak pointers to that object,
…then you might consider allocating the object separately and initializing the std::shared_ptr from that.
What are the design advantages of std::unique_ptr in C++?
I'm not sure what you mean by "design advantages", but here are a couple of advantages I have in mind:
We can safely assume that std::unique_ptr has the same size as a raw pointer (disclaimer: this is not true in general, specifically if the deleter is not the default delete and carries state). So memory and performance should not be an issue. The snippet below (reconstructed; the original was truncated) illustrates this:
#include <iostream>
#include <memory>
int main()
{
    std::unique_ptr<int> unique(new int(42));
    int *p = unique.get();
    std::cout << "Size unique = " << sizeof(unique) << std::endl;
    std::cout << "Size p = " << sizeof(p) << std::endl;
    return 0;
}
Output on a typical 64-bit platform:
Size unique = 8
Size p = 8
It implements exclusive-ownership semantics. A unique pointer owns what it points to, i.e. copying a unique_ptr to another is not allowed (you'd have two "unique" pointers to the same resource, which would make the pointer not unique anymore, right?). You should be quite safe.
On destruction, a unique_ptr destroys/deallocates the resource it points to. You're reasonably safe (but please, keep your guard up). By default, unique_ptr calls delete on the resource it points to.
In a shallow copy, an object is created by simply copying the values of all members of the original object. This works well if none of the object's members point into the heap. If some members hold dynamically allocated heap memory, the copied object's members will reference the same memory locations.
This creates ambiguity and run-time errors such as dangling pointers and double deletes: since both objects reference the same memory, a change made through one is reflected in the other. Since we wanted to create a replica of the object, a shallow copy does not serve that purpose.
A deep copy makes a new and separate copy of the entire object, with its own memory. A deep copy is comparatively slower.
Virtual dispatch does not reach the derived-class override from inside a base-class constructor; the example below demonstrates this.
#include <iostream>
using namespace std;
class base{
private:
int value;
public:
base(int x){
value=x;
fun();
}
virtual void fun(){
cout<<"Base::fun()"<<endl;
}
};
class derived:public base{
private:
int a;
public:
derived(int x, int y):base(x){
base *b;
b=this;
b->fun(); //calls derived::fun()
}
void fun(){
cout<<"derived::fun()”<<endl;
}
};
int main() {
base *b = new derived(10,20);
b->fun();
return 0;
}
output:
Base::fun()     (call inside base's constructor: the object is still a base at that point)
derived::fun()  (call inside derived's constructor body: the base part is already constructed)
derived::fun()  (ordinary virtual call from main)
How do you allocate and deallocate one-dimensional memory in C++?
int *value = new int;   // allocates memory for storing 1 int
delete value;           // deallocates the memory held by value
int *arr = new int[10]; // allocates memory for storing 10 ints
delete [] arr;          // deallocates the memory occupied by arr
Compare compile time polymorphism and Runtime polymorphism
What do you know about friend class and friend function?
Define inline function
What do you mean by abstraction in C++?
Is destructor overloading possible? If yes, then explain; if no, then why not?
What is an abstract class and when do you use it?
What is a copy constructor?
What is the difference between shallow copy and deep copy?
What is the difference between virtual functions and pure virtual functions?
If class D is derived from a base class B. When creating an object of type D in what order would the constructors of these classes get called?
Can we call a virtual function from a constructor?
#include<iostream>
using namespace std;
int main(){
int a=1;
int x=(a++)++;
cout<<x<<endl;
return 0;
}
Output:
It will fail to compile: ++ can only be applied to an lvalue, and the result of a++ is an rvalue, so (a++)++ is ill-formed.
What will be output of the below program.
#include<iostream>
using namespace std;
class A{
public:
virtual void a()=0;
A(){
cout<<"A ";
}
};
class B: public A
{
public:
B(){
cout<<"B ";
}
};
int main(){
A *a=new B();
return 0;
}
Answer: this program does not compile. B does not override the pure virtual function a(), so B is itself abstract and new B() is ill-formed. If B did provide an override, the output would be "A B ", since the base constructor runs before the derived one.
sizeof(void) is 1 byte only as a GCC extension; standard C++ rejects it, because void is an incomplete type.
#include <iostream>
using namespace std;
int main()
{
    cout << "Sizeof void:" << sizeof(void) << endl; // GCC extension; ill-formed in standard C++
    return 0;
}
output (with GCC): 1
If a function is inline, the compiler may place a copy of that function's code at each point where the function is called. One important advantage of an inline function is that it eliminates the call overhead of an ordinary function; note, however, that inline is only a request, which the compiler is free to ignore.
If we change the value of ref, the change is reflected in x. Once a reference variable is initialized, it cannot be re-bound to refer to any other variable. We can declare an array of pointers, but an array of references is not possible.
int x=10;
int &ref=x; //reference variable
No matter how many objects of the class have been created, there is only one copy of a static member, so the same static member is shared by all objects of that class.
A static member function can be called even if no objects of the class exist; static members are accessed using just the class name and the scope resolution operator ::.
STL
Map
How is std::map implemented?
What are the different ways to insert elements into a map?
Dynamic casting example (reconstructed: the original snippet was garbled by HTML stripping the template arguments, so the Derived class shown here is an assumption consistent with the surviving fragments):
#include <iostream>
using namespace std;
class Base
{
public:
    virtual void fun()
    {
        cout << "A::fun()" << endl;
    }
};
class Derived : public Base
{
public:
    void fun()
    {
        cout << "B::fun()" << endl;
    }
};
int main()
{
    Base b;
    Derived *dptr = dynamic_cast<Derived*>(&b); // fails: b is not a Derived
    if (dptr == NULL)
    {
        cout << "NULL";
    }
    else
    {
        cout << "Not NULL" << endl;
    }
    return 0;
}
Output: NULL