A Guide to Threading in Python

In computer science, a thread is defined as the smallest unit of execution, with its own independent sequence of instructions. In simple terms, it is a separate flow of execution. The advantage of threading is that it lets you run different parts of a program concurrently, which can also make the design of the program simpler.  

In a threaded program, multiple threads run within a single process, and each of them performs an independent task. The threads appear to run simultaneously, but in CPython they take turns. If you want true parallelism across processors, you need to use the multiprocessing module or execute your code in a different language runtime. 

In the CPython implementation of Python, the Global Interpreter Lock (GIL) limits execution to one Python thread at a time. Because of this, good candidates for threading are tasks that spend much of their time waiting for external events, such as network or disk I/O. This applies when the code is written in pure Python; extensions written in C, however, have the ability to release the GIL and run concurrently.  
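To see why I/O-bound work is a good fit for threads, here is a minimal sketch. The function name fake_io_task and the specific delays are illustrative, not from the article; time.sleep() releases the GIL while waiting, just as a real network or disk call would.

```python
import threading
import time

def fake_io_task(seconds):
    # Simulate a blocking I/O call; sleep releases the GIL while waiting.
    time.sleep(seconds)

# Sequential: four 0.2-second waits take about 0.8 seconds in total.
start = time.monotonic()
for _ in range(4):
    fake_io_task(0.2)
sequential = time.monotonic() - start

# Threaded: the four waits overlap, so the total is about 0.2 seconds.
start = time.monotonic()
threads = [threading.Thread(target=fake_io_task, args=(0.2,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
threaded = time.monotonic() - start

print(f"sequential: {sequential:.2f}s, threaded: {threaded:.2f}s")
```

The same experiment with CPU-bound work instead of sleep would show little or no speed-up, because the GIL prevents the threads from computing in parallel.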

Structuring your program to use threading can make the design clearer and easier to reason about. Let us see how to start a thread in Python. 

How to Start a Thread? 

The Python Standard Library contains a module named threading, which provides the basic building blocks you need to work with threads. With this module, you can easily encapsulate threads and get a clean interface to work with them.  

If you want to start a thread, first create a Thread instance and then call .start():

import logging
import threading
import time

def thread_function(name):
    logging.info("Thread %s: starting...", name)
    time.sleep(2)
    logging.info("Thread %s: finishing...", name)

if __name__ == "__main__":
    format = "%(asctime)s: %(message)s"
    logging.basicConfig(format=format, level=logging.INFO,
                        datefmt="%H:%M:%S")
    logging.info("Main    : before creating thread...")
    t = threading.Thread(target=thread_function, args=(1,))
    logging.info("Main    : before running thread...")
    t.start()
    logging.info("Main    : wait for the thread to finish...")
    # t.join()
    logging.info("Main    : all done...")

The main section is responsible for creating and starting the thread: 

t = threading.Thread(target=thread_function, args=(1,))
t.start()

When a Thread is created, you pass it a function and a list of arguments to that function. In the example above, thread_function() is run with 1 passed as its argument. The function itself simply logs two messages with a time.sleep() in between them.
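Note that args must be a sequence, which is why the one-element tuple (1,) needs its trailing comma. The sketch below (the greet function and results list are illustrative names) shows both args and kwargs being passed through to the target:

```python
import threading

results = []

def greet(name, greeting="Hello"):
    # Runs in the worker thread; appends rather than prints so the
    # result is easy to inspect after the thread has been joined.
    results.append(f"{greeting}, {name}!")

# args must be a sequence, so a one-element tuple needs the trailing comma.
t = threading.Thread(target=greet, args=("world",), kwargs={"greeting": "Hi"})
t.start()
t.join()
print(results)  # ['Hi, world!']
```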

The output of the full program above will be displayed as:

$ ./single_thread.py
Main    : before creating thread...
Main    : before running thread...
Thread 1: starting...
Main    : wait for the thread to finish...
Main    : all done...
Thread 1: finishing...

Notice that the Thread finishes only after the main section of the code has completed.
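If you uncomment t.join(), the main section instead blocks until the thread completes. A short sketch of .join() together with .is_alive() (the worker function is an illustrative name):

```python
import threading
import time

def worker():
    time.sleep(0.2)

t = threading.Thread(target=worker)
t.start()
alive_before_join = t.is_alive()  # True: worker() is still sleeping
t.join()                          # blocks until worker() returns
alive_after_join = t.is_alive()   # False: join() only returns once done
print(alive_before_join, alive_after_join)
```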

Daemon Threads

In computer science, a daemon is a program that runs as a background process. In Python threading, a daemon thread is a thread that runs in the background without your having to worry about shutting it down. A daemon thread is killed immediately when the program exits. However, if a program is running non-daemon threads, it will wait for those threads to complete before it terminates.  

In the example code above, you might have noticed that there is a pause of about 2 seconds after the main function has printed the all done message and before the thread is finished. This is because Python waits for the non-daemonic thread to complete. 

threading._shutdown() goes through all of the running threads and calls .join() on every non-daemon thread. You can understand this better if you look at the source of the Python threading module.  

Let us repeat the earlier example with a daemon thread by adding the daemon=True flag:

t = threading.Thread(target=thread_function, args=(1,), daemon=True)

Now if you run your program, the output will be as follows: 

$ ./daemon_thread.py 
Main    : before creating thread... 
Main    : before running thread... 
Thread 1: starting... 
Main    : wait for the thread to finish... 
Main    : all done... 

The basic difference here is that the final line of output is missing: when the main function reached the end of the code, the daemon thread was killed before it could finish.
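You can observe this behavior from a test by running a small daemon-thread program in a separate interpreter. The inline script below is an illustrative sketch; because the daemon sleeps longer than the main thread lives, its final message never appears:

```python
import subprocess
import sys

# A minimal daemon-thread program: the daemon sleeps for 2 seconds,
# but the main thread exits immediately, so the daemon is killed.
script = """
import threading, time

def worker():
    time.sleep(2)
    print("Thread: finishing")

t = threading.Thread(target=worker, daemon=True)
t.start()
print("Main: all done")
"""

out = subprocess.run([sys.executable, "-c", script],
                     capture_output=True, text=True, timeout=30).stdout
print(out)
```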

Multiple Threading


The process of executing multiple threads concurrently is called multithreading. It can improve the responsiveness and throughput of a program, and Python multithreading is quite easy to learn.

Let us start understanding multithreading using the example we used earlier:

import logging
import threading
import time

def thread_function(name):
    logging.info("Thread %s: starting...", name)
    time.sleep(2)
    logging.info("Thread %s: finishing...", name)

if __name__ == "__main__":
    format = "%(asctime)s: %(message)s"
    logging.basicConfig(format=format, level=logging.INFO,
                        datefmt="%H:%M:%S")

    multiple_threads = list()
    for index in range(3):
        logging.info("Main    : create and start thread %d...", index)
        t = threading.Thread(target=thread_function, args=(index,))
        multiple_threads.append(t)
        t.start()

    for index, thread in enumerate(multiple_threads):
        logging.info("Main    : before joining thread %d...", index)
        thread.join()
        logging.info("Main    : thread %d done...", index)

This code works in the same way as the single-thread example: we create a Thread object and then call its .start() method. The program keeps a list of Thread objects and then waits for each of them using .join(). Because thread scheduling varies, running this code multiple times will likely produce different output; one run looks as below: 

$ ./multiple_threads.py
Main    : create and start thread 0...
Thread 0: starting...
Main    : create and start thread 1...
Thread 1: starting...
Main    : create and start thread 2... 
Thread 2: starting... 
Main    : before joining thread 0... 
Thread 2: finishing... 
Thread 1: finishing... 
Thread 0: finishing... 
Main    : thread 0 done... 
Main    : before joining thread 1... 
Main    : thread 1 done... 
Main    : before joining thread 2... 
Main    : thread 2 done... 

In this example the threads finished in the opposite order to the one in which they were started. Multithreading produces different orderings on different runs. The Thread x: finishing message tells you when each thread is done. The order in which threads run is determined by the operating system, so you must design any algorithm that uses threading so that it does not depend on the threads running in a particular order.  

A ThreadPoolExecutor

Using a ThreadPoolExecutor is an easier way to start up a group of threads. It is part of the Python Standard Library, in concurrent.futures. You can create it as a context manager using the with statement, which helps manage the creation and destruction of the pool. 

Example to illustrate a ThreadPoolExecutor (only the main section): 

import concurrent.futures

if __name__ == "__main__":
    format = "%(asctime)s: %(message)s"
    logging.basicConfig(format=format, level=logging.INFO,
                        datefmt="%H:%M:%S")

    with concurrent.futures.ThreadPoolExecutor(max_workers=3) as executor:
        executor.map(thread_function, range(3))

The code above creates a ThreadPoolExecutor, tells it how many worker threads it should keep in the pool, and then uses .map() to run thread_function once for each value in the range. When the with block ends, .join() is called on each of the threads in the pool. It is recommended to use a ThreadPoolExecutor whenever possible so that you never forget to .join() the threads.
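One useful property of .map() is that it returns results in the order of its inputs, even when the threads themselves finish out of order. A small sketch (slow_square is an illustrative function whose delays force out-of-order completion):

```python
import concurrent.futures
import time

def slow_square(n):
    # Vary the delay so the threads finish in reverse order.
    time.sleep(0.1 * (3 - n))
    return n * n

with concurrent.futures.ThreadPoolExecutor(max_workers=3) as executor:
    # .map() preserves input order in its results regardless of
    # which thread finishes first.
    results = list(executor.map(slow_square, range(3)))

print(results)  # [0, 1, 4]
```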

The output of the executor example will look as follows:

$ ./executor.py 
Thread 0: starting...
Thread 1: starting...
Thread 2: starting...
Thread 1: finishing...
Thread 0: finishing...
Thread 2: finishing...

Race Conditions 

Race conditions occur when two or more threads access a shared piece of data or a shared resource. They produce results that are confusing to the user, occur only intermittently, and are very difficult to debug.

Let us try to understand a race condition using a class with a false database:

class FalseDatabase:
    def __init__(self):
        self.value = 0

    def update(self, name):
        logging.info("Thread %s: starting update...", name)
        local_copy = self.value
        local_copy += 1
        time.sleep(0.1)
        self.value = local_copy
        logging.info("Thread %s: finishing update...", name)

The class FalseDatabase holds the shared data in .value, on which the race condition will occur. Its __init__() method simply initializes .value to zero.  

The job of .update() is to simulate reading a value from a database, performing some computation on it, and then writing the value back to the database. Reading from the database means copying .value to a local variable. The computation just adds one to the value and then .sleep()s for a little bit. Finally, the value is written back by copying the local value to .value.

The main section of FalseDatabase:

if __name__ == "__main__":
    format = "%(asctime)s: %(message)s"
    logging.basicConfig(format=format, level=logging.INFO,
                        datefmt="%H:%M:%S")

    dtb = FalseDatabase()
    logging.info("Testing unlocked update... Starting value is %d...", dtb.value)
    with concurrent.futures.ThreadPoolExecutor(max_workers=2) as executor:
        for index in range(2):
            executor.submit(dtb.update, index)
    logging.info("Testing unlocked update... Ending value is %d...", dtb.value)

The program creates a ThreadPoolExecutor with two threads and calls .submit() on it for each index, which schedules dtb.update() to run in a thread.

.submit() accepts both positional and named arguments and passes them through to the function running in the thread: 

.submit(function, *args, **kwargs)
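Here is a minimal sketch of that signature (the scale function is an illustrative name); the returned Future's .result() blocks until the scheduled call completes:

```python
import concurrent.futures

def scale(value, factor=1):
    return value * factor

with concurrent.futures.ThreadPoolExecutor(max_workers=1) as executor:
    # Positional and keyword arguments after the callable are passed
    # through to the function running in the worker thread.
    future = executor.submit(scale, 21, factor=2)
    result = future.result()  # blocks until the call completes

print(result)  # 42
```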

The output of the race condition program will look as follows: 

$ ./racecond.py
Testing unlocked update... Starting value is 0...
Thread 0: starting update...
Thread 1: starting update...
Thread 0: finishing update...
Thread 1: finishing update...
Testing unlocked update... Ending value is 1...
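The race is reproducible with a stripped-down counter. The sketch below (FakeCounter is an illustrative stand-in for the article's FalseDatabase) uses the sleep between read and write so that both threads read the value before either writes it back:

```python
import concurrent.futures
import time

class FakeCounter:
    # Illustrative stand-in for FalseDatabase.
    def __init__(self):
        self.value = 0

    def update(self):
        local_copy = self.value  # read
        local_copy += 1          # modify
        time.sleep(0.1)          # yield so the other thread also reads 0
        self.value = local_copy  # write back, clobbering the other update

counter = FakeCounter()
with concurrent.futures.ThreadPoolExecutor(max_workers=2) as executor:
    for _ in range(2):
        executor.submit(counter.update)

print(counter.value)  # 1, not 2: the second write overwrote the first
```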

One Thread

In this section, let us discuss how a single thread works, in a simplified manner.  

When we tell the ThreadPoolExecutor to run each thread, we specify which function to run and what parameters to pass to it: executor.submit(dtb.update, index). This causes each thread in the pool to call dtb.update(index). dtb is the reference to the FalseDatabase object that was created in the main section.

Each of the threads has a reference to the same database object, along with a unique index value that makes the log statements readable. Each thread also has its own version of all the data local to the function. This is called local_copy in the case of .update(). This is an advantage: all variables that are local to a function are thread-safe.
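A quick sketch of why function-local variables are thread-safe (count_up and results are illustrative names): each thread gets its own total, so no locking is needed for it:

```python
import threading

results = {}

def count_up(name):
    # 'total' is local to this call, so every thread has its own copy;
    # no lock is needed for it. The shared results dict gets one
    # distinct key per thread.
    total = 0
    for _ in range(1000):
        total += 1
    results[name] = total

threads = [threading.Thread(target=count_up, args=(i,)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(results)  # each thread counted to 1000 independently
```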

Two Threads

If we consider the race condition again, the two threads run concurrently. They each point to the same shared object, dtb, and each has its own version of local_copy. It is this shared object that causes the problems.  

The program starts with Thread 1 running .update(). Thread 1 copies dtb.value (zero) into its local_copy and increments it, and then calls time.sleep(), which allows Thread 2 to take its place and start running. Thread 2 now performs the same operations as Thread 1: it also copies dtb.value into its local_copy, and this shared value is still zero because Thread 1 has not yet written its result back.  

When Thread 2 then goes to sleep, the shared dtb.value still contains zero, and both versions of local_copy hold the value one. Thread 1 wakes up first, saves its local_copy, and terminates, which gives Thread 2 a chance to run. Thread 2 is unaware of Thread 1's update, and it then stores its own version of local_copy into dtb.value, setting it to one again.  

The race condition here is that Thread 1 and Thread 2 have interleaved access to a single shared object, and they overwrite each other's results. A race condition can also occur when one thread releases memory or closes a file handle before the other thread has finished working with it. 

Basic Synchronization in Threading

You can solve race conditions with the help of a Lock. A Lock is an object that acts like a hall pass: only one thread at a time may hold it and enter the read-modify-write section of the code. Any other thread that wants to enter at the same time has to wait until the current owner of the Lock gives it up.  

The basic functions are .acquire() and .release(). A thread calls my_lock.acquire() to get the Lock. If the Lock is already held by another thread, the calling thread waits until it is released. 
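Here is a minimal sketch of the .acquire()/.release() pairing. The try/finally ensures the Lock is released even if the guarded code raises; the non-blocking second acquire merely demonstrates that the Lock is held:

```python
import threading

my_lock = threading.Lock()

my_lock.acquire()
try:
    # ... the read-modify-write section would go here ...
    # a non-blocking acquire fails while the Lock is already held
    got_it = my_lock.acquire(blocking=False)
finally:
    my_lock.release()

print(got_it)  # False: the Lock was already held
```

Note that a plain Lock is not reentrant; even the thread that holds it cannot acquire it a second time (threading.RLock exists for that case).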

A Lock in Python also works as a context manager, so it can be used in a with statement and is released automatically when the with block is exited. Let us take the previous FalseDatabase class and add a Lock to it:

class FalseDatabase:
    def __init__(self):
        self.value = 0
        self._lock = threading.Lock()

    def locked_update(self, name):
        logging.info("Thread %s: starting update...", name)
        logging.debug("Thread %s about to lock...", name)
        with self._lock:
            logging.debug("Thread %s has lock...", name)
            local_copy = self.value
            local_copy += 1
            time.sleep(0.1)
            self.value = local_copy
            logging.debug("Thread %s about to release lock...", name)
        logging.debug("Thread %s after release...", name)
        logging.info("Thread %s: finishing update...", name)

._lock is a threading.Lock object. It is initialized in the unlocked state, then acquired and released by the with statement. 

The output of the code above with logging set to warning level will be as follows: 

$ ./fixingracecondition.py
Testing locked update. Starting value is 0.
Thread 0: starting update...
Thread 1: starting update...
Thread 0: finishing update...
Thread 1: finishing update...
Testing locked update. Ending value is 2.

The output of the code with full logging, with the level set to DEBUG:

$ ./fixingracecondition.py
Testing locked update. Starting value is 0.
Thread 0: starting update...
Thread 0 about to lock...
Thread 0 has lock...
Thread 1: starting update...
Thread 1 about to lock...
Thread 0 about to release lock...
Thread 0 after release...
Thread 0: finishing update...
Thread 1 has lock...
Thread 1 about to release lock...
Thread 1 after release...
Thread 1: finishing update...
Testing locked update. Ending value is 2.

The Lock provides mutual exclusion between the threads.

The Producer-Consumer Threading Problem

In Computer Science, the Producer-Consumer Threading Problem is a classic example of a multi-process synchronization problem.  

Consider a program that has to read messages and write them to disk. It listens for and accepts messages as they come in; the messages arrive in bursts, not at regular intervals. This part of the program is called the producer.  

On the other hand, once you have a message, you need to write it to a database. The database access is slow, which is a problem when a burst of messages comes in. This part of the program is called the consumer.  

A pipeline has to be created between the producer and the consumer. The pipeline is the part that will change as you learn about different synchronization objects.  

Using Lock

The basic design is a producer thread that reads from a false network and puts the messages into the pipeline:

import random

Sentinel = object()

def producer(pipeline):
    """Pretend we're getting a message from the network."""
    for index in range(10):
        msg = random.randint(1, 100)
        logging.info("Producer got message: %s", msg)
        pipeline.set_msg(msg, "Producer")

    # Send a sentinel message to tell the consumer we're done
    pipeline.set_msg(Sentinel, "Producer")

The producer gets a random number between 1 and 100 and calls .set_msg() on the pipeline to send it to the consumer.

def consumer(pipeline):
    """Pretend we're saving a number in the database."""
    msg = 0
    while msg is not Sentinel:
        msg = pipeline.get_msg("Consumer")
        if msg is not Sentinel:
            logging.info("Consumer storing message: %s", msg)

The consumer reads a message from the pipeline and logs it as if it were being saved to the false database.

The main section of the code is as follows:

if __name__ == "__main__":
    format = "%(asctime)s: %(message)s"
    logging.basicConfig(format=format, level=logging.INFO,
                        datefmt="%H:%M:%S")
    # logging.getLogger().setLevel(logging.DEBUG)

    pipeline = Pipeline()
    with concurrent.futures.ThreadPoolExecutor(max_workers=2) as executor:
        executor.submit(producer, pipeline)
        executor.submit(consumer, pipeline)

Now let us see the code of Pipeline, which passes messages from the producer to the consumer:

class Pipeline:
    """Class to allow a single-element pipeline between producer and consumer."""
    def __init__(self):
        self.msg = 0
        self.producer_lock = threading.Lock()
        self.consumer_lock = threading.Lock()
        self.consumer_lock.acquire()

    def get_msg(self, name):
        logging.debug("%s:about to acquire getlock...", name)
        self.consumer_lock.acquire()
        logging.debug("%s:have getlock...", name)
        msg = self.msg
        logging.debug("%s:about to release setlock...", name)
        self.producer_lock.release()
        logging.debug("%s:setlock released...", name)
        return msg

    def set_msg(self, msg, name):
        logging.debug("%s:about to acquire setlock...", name)
        self.producer_lock.acquire()
        logging.debug("%s:have setlock...", name)
        self.msg = msg
        logging.debug("%s:about to release getlock...", name)
        self.consumer_lock.release()
        logging.debug("%s:getlock released...", name)

The members of Pipeline are: 

  • .msg - It stores the message to pass.
  • .producer_lock - A threading.Lock object that restricts the producer's access to the message.
  • .consumer_lock - A threading.Lock object that restricts the consumer's access to the message.

The initializer sets up the three members and then calls .acquire() on the .consumer_lock. This means the producer is allowed to add a new message, but the consumer has to wait until a message is present.  

.get_msg() calls .acquire() on the .consumer_lock; once it is acquired, the consumer copies the value from .msg and then calls .release() on the .producer_lock. Releasing that lock is what allows the producer to insert the next message into the pipeline. The producer then calls .set_msg(), which acquires the .producer_lock, sets .msg, and releases the .consumer_lock, allowing the consumer to read the value. 

The output of the code with the logging set to WARNING is:

$ ./producerconsumer_lock.py
Producer got message: 43
Producer got message: 45
Consumer storing message: 43
Producer got message: 86
Consumer storing message: 45
Producer got message: 40
Consumer storing message: 86
Producer got message: 62
Consumer storing message: 40
Producer got message: 15
Consumer storing message: 62
Producer got message: 16
Consumer storing message: 15
Producer got message: 61
Consumer storing message: 16
Producer got message: 73
Consumer storing message: 61
Producer got message: 22
Consumer storing message: 73
Consumer storing message: 22
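For comparison, the standard library's queue.Queue provides this producer-consumer handoff ready-made. A minimal sketch, where maxsize=1 mimics the single-element pipeline above and the fixed message list is invented for illustration:

```python
import queue
import threading

SENTINEL = object()

def producer(q):
    for msg in [43, 45, 86]:     # stand-in for the "network" messages
        q.put(msg)               # blocks while the queue is full
    q.put(SENTINEL)              # tell the consumer we're done

def consumer(q, out):
    while True:
        msg = q.get()            # blocks while the queue is empty
        if msg is SENTINEL:
            break
        out.append(msg)

q = queue.Queue(maxsize=1)       # single-element pipeline
out = []
t1 = threading.Thread(target=producer, args=(q,))
t2 = threading.Thread(target=consumer, args=(q, out))
t1.start(); t2.start()
t1.join(); t2.join()

print(out)  # [43, 45, 86] -- order is preserved
```

queue.Queue does all of its own locking internally, so the hand-rolled two-Lock handshake is not needed in production code.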

Objects in Threading 

The threading module provides a few more synchronization primitives that can be handy in different cases. Some of them are discussed below. 

Semaphore 

A semaphore is a counter with a few special properties. The first is that its counting is atomic, which means the operating system will not swap out the thread in the middle of incrementing or decrementing the counter. The internal counter is incremented when .release() is called and decremented when .acquire() is called.  

The other property is that if a thread calls .acquire() while the counter is zero, that thread blocks until another thread calls .release(). 

The main use of semaphores is to protect a resource with limited capacity, for example a pool of connections whose size you want to limit to a particular number. 
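A minimal sketch of capping concurrency with a Semaphore; the pool size of two and the worker timings are invented for illustration:

```python
import threading
import time

pool = threading.Semaphore(2)       # at most two "connections" at once
counter_lock = threading.Lock()     # protects the bookkeeping counters
active = 0
max_active = 0

def worker():
    global active, max_active
    with pool:                      # .acquire() on entry, .release() on exit
        with counter_lock:
            active += 1
            max_active = max(max_active, active)
        time.sleep(0.05)            # pretend to use the connection
        with counter_lock:
            active -= 1

threads = [threading.Thread(target=worker) for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(max_active)  # never exceeds 2
```

Five workers are started, but the semaphore ensures no more than two are ever inside the guarded section at the same time.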

Timer 

A threading.Timer is used to schedule a function to be called after a certain amount of time has passed. You create a Timer by passing in the number of seconds to wait and the function to call:

t = threading.Timer(20.0,my_timer_function) 

The timer is started by calling .start(), and you can stop it by calling .cancel() before it fires. A Timer can be used, for example, to prompt a user for action after a specific amount of time.  
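A short sketch of both behaviours; the delays are shrunk so the example runs quickly, and my_timer_function is a placeholder name from the snippet above:

```python
import threading

fired = []

def my_timer_function():
    fired.append("fired")

# This timer is allowed to go off
t = threading.Timer(0.1, my_timer_function)
t.start()
t.join()          # Timer is a Thread subclass, so we can wait for it

# This one is cancelled before it fires
t2 = threading.Timer(10.0, my_timer_function)
t2.start()
t2.cancel()

print(fired)  # ['fired'] -- only the first timer ran
```

.cancel() only works before the waiting interval has elapsed; once the function has started running, it cannot be stopped.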

Summary 

In this article, we have covered most of the topics associated with threading in Python. We have discussed:

  • What is Threading 
  • Creating and starting a Thread 
  • Working with multiple threads 
  • Race Conditions and how to prevent them 
  • Threading Objects 

We hope you now have a good grasp of Python threading: how to build threaded programs, the problems they solve, and the issues that arise when writing and debugging them.  

For more information about threading and its uses in real-world applications, you may refer to the official Python threading documentation. To pick up more Python tips and tricks, check our Python tutorial, and get a good hold over coding in Python by joining the Python certification course.


Priyankur Sarkar

Data Science Enthusiast

Priyankur Sarkar loves to play with data and get insightful results out of it, then turn those data insights and results in business growth. He is an electronics engineer with a versatile experience as an individual contributor and leading teams, and has actively worked towards building Machine Learning capabilities for organizations.

