
Multithreading and multiprocessing; processes and threads



    Python 3.x study notes 15 (multithreading), python3.x

    1. Threads and processes
    Process: a program cannot run by itself. Only when the program is loaded into memory and the system allocates resources to it can it run, and such a running program is called a process. A process does not execute anything by itself; it is only a collection of the program's resources.

    Thread: a thread is the smallest unit of execution that the operating system can schedule. It is contained within a process and is the actual unit of execution inside the process. A thread is a single sequential flow of control within a process; one process can have several threads running concurrently, each carrying out a different task.

    2. Differences between threads and processes

    • Threads share memory space; each process has its own independent memory.

    • A thread shares the address space of the process that created it; a process has its own address space.

    • A thread can directly access the data segment of its process; a process gets its own copy of its parent process's data segment.

    • A thread can communicate directly with the other threads of its process; a process must use inter-process communication to talk to its sibling processes.

    • New threads are easy to create; a new process requires duplicating its parent process.

    • A thread can exercise considerable control over the threads of the same process; a process can only control its child processes.

    • Changes to the main thread (cancellation, priority change, etc.) may affect the behaviour of the other threads of the process; changes to a parent process do not affect its child processes.

     

    3. A process has at least one thread

    4. Thread locks
        When a thread is about to modify shared data, it can put a lock on that data to prevent other threads from modifying it before it has finished. Any other thread that wants to modify the data must then wait until the lock is released before it can access it.
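
    As a minimal sketch of this idea (not part of the original exercises; the function and variable names are illustrative), using threading.Lock as a context manager:

    import threading

    num = 0
    lock = threading.Lock()

    def add_one():
        global num
        with lock:           # acquire the lock; it is released automatically when the block ends
            num += 1

    threads = [threading.Thread(target=add_one) for _ in range(100)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(num)   # always 100, because the increments are serialized by the lock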

     

    5. Semaphore

        A mutex allows only one thread to change the data at a time, whereas a Semaphore allows a fixed number of threads to change it at the same time. For example, if a toilet has 3 stalls, at most 3 people can use it at once; the people behind can only go in after someone comes out.

     

    6. join waits for a thread to finish executing
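
    A small sketch of what join does (not part of the original exercises; the worker function here is illustrative):

    import threading, time

    def work():
        time.sleep(1)
        print('worker finished')

    t = threading.Thread(target=work)
    t.start()
    t.join()     # block here until the worker thread has finished
    print('the main thread continues only after join() returns')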

     

    7. Exercises

    Semaphore

    __author__ = "Narwhale"
    
    import threading,time
    
    def run(n):
        semaphore.acquire()
        time.sleep(1)
        print('Thread %s is running!' % n)
        semaphore.release()
    
    if __name__ == '__main__':
        semaphore = threading.BoundedSemaphore(5)      # at most 5 threads run at the same time
        for i in range(20):
            t = threading.Thread(target=run,args=(i,))
            t.start()
    
    while threading.active_count() !=1:
        pass
    else:
        print('All threads have finished!')
    

    Producer-consumer model

    __author__ = "Narwhale"
    import queue,time,threading
    q = queue.Queue(10)
    
    def producer(name):
        count = 0
        while True:
            print('%s produced baozi %s' % (name, count))
            q.put('baozi %s' % count)
            count += 1
            time.sleep(1)
    
    def consumer(name):
        while True:
            print('%s took %s and ate it.....' % (name, q.get()))
            time.sleep(1)
    
    
    A1 = threading.Thread(target=producer,args=('A1',))
    A1.start()
    
    B1 = threading.Thread(target=consumer,args=('B1',))
    B1.start()
    B2 = threading.Thread(target=consumer,args=('B2',))
    B2.start()
    

    Traffic light

    __author__ = "Narwhale"
    
    import threading,time
    
    event = threading.Event()
    
    def light():
        event.set()
        count = 0
        while True:
            if count >5 and count < 10:
                event.clear()
                print('\033[41;1mThe red light is on\033[0m')
            elif count > 10:
                event.set()
                count = 0
            else:
                print('\033[42;1mThe green light is on\033[0m')
            time.sleep(1)
            count += 1
    
    
    def car(n):
        while True:
            if event.isSet():
                print('\033[34;1mCar %s is running!\033[0m' % n)
                time.sleep(1)
            else:
                print('The car has stopped')
                event.wait()
    
    light = threading.Thread(target=light,args=( ))
    light.start()
    car1 = threading.Thread(target=car,args=('Tesla',))
    car1.start()
    

     


    Thread reference documentation

        Threads:

    I. Processes and threads

    1. What is a thread
    A thread is the smallest unit of execution that the operating system can schedule. It is contained within a process and is the actual unit of execution inside the process. A thread is a single sequential flow of control within a process; one process can have several threads running concurrently, each carrying out a different task.
    A thread is an execution context, i.e. all the information a CPU needs to execute a stream of instructions.
    Suppose you are reading a book and want to take a break, but you also want to be able to come back and resume from where you stopped. One way to do this is to jot down the page number, line number and word number. Those numbers are your execution context for reading the book.
    If you have a roommate who uses the same technique, she can take the book while you are not using it and continue reading from where she stopped. Then you can take it back and resume where you were.
    Threads work the same way. A CPU gives you the illusion that it is doing several computations at the same time. It does this by spending a little time on each computation, and it can do this because it keeps an execution context for each computation. Just as you can share a book with your friend, many tasks can share a CPU.
    At a more technical level, an execution context (and therefore a thread) consists of the values of the CPU's registers.
    Finally: threads are different from processes. A thread is a context of execution, while a process is a bunch of resources associated with a computation. A process can have one or many threads.
    Clarification: the resources associated with a process include memory pages (all threads in a process share the same view of memory), file descriptors (e.g. open sockets) and security credentials (e.g. the ID of the user who started the process).

    2. What is a process
    An instance of an executing program is called a process.
    Each process provides the resources needed to execute a program. A process has a virtual address space, executable code, open handles to system objects, a security context, a unique process identifier, environment variables, a priority class, minimum and maximum working set sizes, and at least one thread of execution. Each process starts with a single thread, often called the primary thread, but it can create additional threads from any of its threads.

    3. Differences between processes and threads

    1. A thread shares the address space of the process that created it; a process has its own address space.
    2. A thread can directly access the data segment of its process; a process has its own copy of its parent process's data segment.
    3. A thread can communicate directly with the other threads of its process; a process must use inter-process communication to talk to its sibling processes.
    4. New threads are easy to create; a new process requires duplicating its parent process.
    5. A thread can exercise considerable control over the threads of the same process; a process can only control its child processes.
    6. Changes to the main thread (cancellation, priority change, etc.) may affect the behaviour of the process's other threads; changes to a parent process do not affect its child processes.

    4. Python GIL (Global Interpreter Lock)
    In CPython, the global interpreter lock, or GIL, is a mutex that prevents multiple native threads from executing Python bytecode at once. This lock is necessary mainly because CPython's memory management is not thread-safe. (However, since the GIL exists, other features have come to depend on the guarantees it enforces.)
    The first thing to be clear about is that the GIL is not a feature of the Python language; it is a concept introduced by one implementation of the Python interpreter, CPython. Just as C++ is a language (syntax) specification that can be compiled into executable code by different compilers (well-known ones include GCC, Intel C++, Visual C++, and so on), the same piece of Python code can be run by different Python runtimes such as CPython, PyPy or Psyco. JPython, for example, has no GIL. But because CPython is the default Python runtime in most environments, many people equate CPython with Python and take it for granted that the GIL is a defect of the Python language. So be clear on this point first: the GIL is not a feature of Python, and Python does not have to depend on the GIL at all.
    Reference documentation:
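
    A small sketch (not from the original) that makes the effect of the GIL visible on CPU-bound work; exact timings vary by machine, but under CPython the threaded version is not faster:

    import threading, time

    def count_down(n):
        while n > 0:
            n -= 1

    N = 10_000_000

    start = time.time()
    count_down(N)
    count_down(N)
    print('sequential :', time.time() - start)

    start = time.time()
    t1 = threading.Thread(target=count_down, args=(N,))
    t2 = threading.Thread(target=count_down, args=(N,))
    t1.start(); t2.start()
    t1.join(); t2.join()
    print('two threads:', time.time() - start)   # roughly the same or slower: only one thread runs bytecode at a time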


    II. Multithreading

    Multithreading is similar to running several different programs at the same time. Running multiple threads has the following advantages:

    1. Tasks that take a long time can be put into a background thread.

    2. The user interface can be more appealing; for example, when the user clicks a button to trigger the handling of some event, a progress bar can pop up to show the progress of the processing.

    3. The program may run faster.
    4. Threads are especially useful for tasks that involve waiting, such as user input, file reads and writes, and sending and receiving network data. In those cases some precious resources, such as memory, can be released while waiting.
    5. During execution a thread still differs from a process. Each independent thread has an entry point, a sequential execution order and an exit point, but a thread cannot run on its own; it must live inside an application, and the application provides the control over multiple threads.
    6. Each thread has its own set of CPU registers, called the thread's context; this context reflects the state of the CPU registers the last time the thread ran.
    7. The instruction pointer and the stack pointer are the two most important registers in a thread's context. A thread always runs in the context of its process, and these addresses are used to reference memory in the address space of the process that owns the thread.
    8. A thread can be preempted (interrupted).
    9. While other threads are running, a thread can be put on hold (also called sleeping); this is called yielding.

    1. The threading module

    Called directly:
    import threading
    import time

    def code(num): # the function each thread will run
    
        print("running on number:%s" %num)
    
        time.sleep(3)
    
    if __name__ == '__main__':
    
        t1 = threading.Thread(target=code,args=(1,)) # create a thread instance
        t2 = threading.Thread(target=code,args=(2,)) # create another thread instance
    
        t1.start() # start the thread
        t2.start() # start the other thread
    
        print(t1.getName()) # get the thread's name
        print(t2.getName())
    Or:
    #!/usr/bin/env python
    #coding:utf-8
    import threading
    import time
    class A(object): # the callable each thread will run
        def __init__(self, num):
            self.num = num
            self.run()
        def run(self):
            print('thread', self.num)
            time.sleep(1)
    for i in range(10):
        t = threading.Thread(target=A, args=(i,)) # create a thread instance; target is the callable to execute
        t.start() # start the thread
    

    Called by subclassing:

    import threading
    import time
    class MyThread(threading.Thread): # inherit from threading.Thread
        def __init__(self,num):
            threading.Thread.__init__(self)
            self.num = num
        def run(self): # the function each thread will run
    
            print("I am thread number %s" % self.num)
    
            time.sleep(3) # wait three seconds after finishing
    
    if __name__ == '__main__':
        t1 = MyThread(1)
        t2 = MyThread(2)
        t1.start()
        t2.start()
    Or:
    import threading
    import time
    class MyThread(threading.Thread): # inherit from threading.Thread
        def __init__(self,num):
            threading.Thread.__init__(self)
            self.num = num
        def run(self): # the function each thread will run
    
            print("I am thread number %s" % self.num)
    
            time.sleep(3) # wait three seconds after finishing
    
    if __name__ == '__main__':
        for i in range(10):
            t = MyThread(i)
            t.start()
    

    The code above creates 10 "foreground" threads and then hands control to the CPU, which schedules them according to its algorithm and executes their instructions in time slices.

    2. Differences between threads and processes

    A process can actually consist of multiple execution units called threads. Each thread runs in the context of the process and shares the same code and global data.


    Attributes and methods:

    import threading
    First import the threading module; this is the prerequisite for using multithreading.

    • start: the thread is ready and waits to be scheduled by the CPU
    • setName: set the thread's name
    • getName: get the thread's name
    • setDaemon: set the thread as a daemon (background) thread or a foreground thread (the default)
      If it is a daemon thread, it runs alongside the main thread, and when the main thread finishes, the daemon thread is stopped whether or not it has finished.
      If it is a foreground thread, it runs alongside the main thread, and after the main thread finishes, the program waits for the foreground thread to finish before exiting.
    • join: execute the threads one after another, continuing only after each has finished; this makes multithreading pointless
    • run: the run() method of the Thread object is executed once the thread is scheduled by the CPU

    2.Join & Daemon

    线程共享内部存储器空间,                                                                                               进度的内部存款和储蓄器是单独的

    Because real network servers need concurrency, threads have become an increasingly important programming model: it is easier to share data between threads than between processes, and threads are generally cheaper than processes.


    • A thread is the smallest unit of execution that the operating system can schedule. It is contained within a process and is the actual unit of execution inside the process. A thread is a single sequential flow of control within a process; one process can have several threads running concurrently, each carrying out a different task.
    • The smallest unit the OS schedules onto the CPU | thread: a bundle of instructions (a flow of control); the thread is what actually executes the instructions
    • all the threads in a process have the same view of the memory: threads in the same process share the same memory space

    • IO operations do not occupy the CPU (reading and storing data); computation does occupy the CPU (1+1...)
    • Python multithreading is not suited to CPU-bound work; it is suited to IO-bound work (see the sketch below)
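
    A rough illustration of the IO-bound case (a sketch, not from the original; time.sleep stands in for a blocking network or disk call):

    import threading, time

    def fake_io(n):
        time.sleep(1)          # pretend this is a blocking IO call
        print('request %s done' % n)

    start = time.time()
    threads = [threading.Thread(target=fake_io, args=(i,)) for i in range(5)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print('total:', time.time() - start)   # about 1 second rather than 5: the waits overlap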

        每四个程序的内部存款和储蓄器是单独的,相互不可能平素访问。

    join

    1) The join method blocks the caller (it cannot execute the statements after join) and waits for the joined thread.
    2) With several threads and several joins, the joins run one after another; the previous one must finish before the next starts.
    3) With no argument, join waits until the thread ends before the next thread's join starts.
    4) With an argument, join waits for the thread only that long and then stops caring about it (even if the thread has not finished).
    'Stops caring' means the statements after the join can continue to execute.

    For example:
    If join is not used:

    import time
    import threading
    
    def run(n):
    
        print('running [%s]\n' % n)
        time.sleep(2)
        print('run finished--')
    def main():
        for i in range(5):
            t = threading.Thread(target=run,args=[i,])
            #time.sleep(1)
            t.start()
            t.join(1)
            print('active thread name', t.getName())
    # the first thing to run
    m = threading.Thread(target=main,args=[])
    m.start()
    print("---main thread done----")
    print('continue executing')
    

    The result is as follows:

    ---main thread done----  # printed before the threads have finished
    running [0]
    continue executing       # printed before the threads have finished
    
    active thread name Thread-2
    running [1]
    
    run finished--
    active thread name Thread-3
    running [2]
    
    run finished--
    active thread name Thread-4
    running [3]
    
    run finished--
    active thread name Thread-5
    running [4]
    
    run finished--
    active thread name Thread-6
    run finished--
    

    If join is used:

    import time
    import threading
    
    def run(n):
    
        print('running [%s]\n' % n)
        time.sleep(1)
        print('run finished--')
    def main():
        for i in range(5):
            t = threading.Thread(target=run,args=[i,])
            t.start()
            t.join(1)
            print('active thread name', t.getName())
    # the first thing to run
    m = threading.Thread(target=main,args=[])
    m.start()
    m.join() # enable join
    print("---main thread done----") # printed only after the threads have finished
    print('continue executing')      # printed only after the threads have finished
    

    Note: join(time) waits time seconds; if the thread has not finished within that time, it stops waiting and execution continues.
    As follows:

    import time
    import threading
    
    def run(n):
    
        print('running [%s]\n' % n)
        time.sleep(1)
        print('run finished--')
    def main():
        for i in range(5):
            t = threading.Thread(target=run,args=[i,])
            #time.sleep(1)
            t.start()
            t.join(1)
            print('active thread name', t.getName())
    # the first thing to run
    m = threading.Thread(target=main,args=[])
    m.start()
    m.join(timeout=2) # set a timeout
    print("---main thread done----")
    print('continue executing')
    

    Result:

    running [0]
    
    active thread name Thread-2
    run finished--
    running [1]
    
    run finished--
    active thread name Thread-3
    ---main thread done----  # executed
    continue executing       # executed
    running [2]
    
    run finished--
    active thread name Thread-4
    running [3]
    
    run finished--
    active thread name Thread-5
    running [4]
    
    run finished--
    active thread name Thread-6
    



    daemon

    Some threads do background tasks, such as sending keepalive packets or performing periodic garbage collection. These are only useful while the main program is running; it is fine to kill them once the other, non-daemon threads have exited.
    Without daemon threads you would have to keep track of them and tell them to exit before your program can quit completely. By setting them as daemon threads you can let them run and forget about them, and when the program exits, any daemon threads are killed automatically.

    import time
    import threading
    
    def run(n):
    
        print('running [%s]\n' % n)
        time.sleep(1)
        print('run finished--')
    def main():
        for i in range(5):
            t = threading.Thread(target=run,args=[i,])
            time.sleep(1)
            t.start()
            t.join(1)
            print('active thread name', t.getName())
    # the first thing to run
    m = threading.Thread(target=main,args=[])
    m.setDaemon(True) # make m a daemon thread: when the main thread exits, m and its sub-threads are killed too, whether or not they have finished their tasks
    m.start()
    
    print("---main thread done----")
    print('continue executing')
    

    Note: daemon threads are stopped abruptly at shutdown. Their resources (open files, database transactions, and so on) may not be released properly. If you want your threads to stop gracefully, make them non-daemonic and use a suitable signalling mechanism such as an Event.



    A program cannot run on its own; only when the program is loaded into memory and the system allocates resources to it can it run, and such a running program is called a process.

        What is exposed to the operating system for management as a whole, including the calls to all kinds of resources and the management of memory and other resources, can be called a process. A process itself cannot execute; it is just a bundle of instructions, and it is threads that the operating system executes.

    Thread locks

    Several threads can run under one process, and they share the parent process's memory space, which means every thread can access the same data. If two threads modify the same piece of data at the same time, the data may end up being modified concurrently by more than one thread.
    Because threads are scheduled arbitrarily, and each thread may execute only a few instructions before the CPU switches to another thread, the following problem can occur:

    import time
    import threading
    
    def addNum(ip):
        global num # every thread reads this global variable
        print('--get num:', num, 'thread', ip)
        time.sleep(1)
        num += 1 # increment the shared variable
        num_list.append(num)
    
    num = 0  # define a shared variable
    thread_list = []
    num_list =[]
    for i in range(10):
        t = threading.Thread(target=addNum,args=(i,))
        t.start()
        thread_list.append(t)
    
    for t in thread_list: # wait for all threads to finish
        t.join()
    
    print('final num:', num )
    print(num_list)
    

    Result:

    --get num: 0 thread 0
    --get num: 0 thread 1
    --get num: 0 thread 2
    --get num: 0 thread 3
    --get num: 0 thread 4
    --get num: 0 thread 5
    --get num: 0 thread 6
    --get num: 0 thread 7
    --get num: 0 thread 8
    --get num: 0 thread 9
    final num: 10
    [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
    

    Normally the final value of num should be 0, but if you run this a few times on Python 2.7 you will find that the printed num is not always 0. Why does the result differ from run to run? Simple: suppose you have two threads A and B that both want to decrement num by 1. Since the two threads run concurrently, both may fetch the starting value num=100 at the same time and hand it to the CPU. Thread A computes 99, but thread B also computes 99, and after both results are assigned back to num the value is 99. What can be done? Very simple: when a thread wants to modify shared data, to prevent other threads from changing it before it has finished, it can put a lock on the data; any other thread that wants to modify the data must then wait until the lock is released before it can access it.
    *Note: do not test this on 3.x; for some reason the result there is always correct, perhaps a lock is added automatically.

    After adding the lock:

    import time   
    import threading
    
    def addNum():
        global num # every thread reads this global variable
        print('--get num:', num)
        time.sleep(1)
        lock.acquire() # acquire the lock before modifying the data
        num -= 1 # decrement the shared variable
        lock.release() # release the lock after modifying
    num = 100  # define a shared variable
    thread_list = []
    lock = threading.Lock() # create a global lock
    for i in range(100):
        t = threading.Thread(target=addNum)
        t.start()
        thread_list.append(t)
    
    for t in thread_list: # wait for all threads to finish
        t.join()
    
    print('final num:', num )
    

    RLock (recursive lock)
    Roughly speaking, a big lock that can contain child locks inside it.
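
    A minimal sketch of why a reentrant lock matters (not from the original; the function names are illustrative): the same thread acquires the lock twice, which would deadlock with a plain Lock:

    import threading

    lock = threading.RLock()     # reentrant: the owning thread may acquire it again

    def inner():
        with lock:               # second acquisition by the same thread; fine with RLock
            print('inner holds the lock')

    def outer():
        with lock:               # first acquisition
            inner()

    outer()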

    Semaphore

    A mutex allows only one thread to change the data at a time, whereas a Semaphore allows a fixed number of threads to change it at the same time. For example, if a toilet has 3 stalls, at most 3 people can use it at once; the people behind can only go in after someone comes out.

    event

    An event is a simple synchronization object;

    the event represents an internal flag, and threads

    can wait for the flag to be set, or set or clear the flag themselves.

    event = threading.Event()

    # a client thread can wait for the flag to be set

    event.wait() # a server thread can set or reset it

    event.set()

    event.clear()

    If the flag is set, the wait method does nothing.

    If the flag is cleared, wait will block until it becomes set again.

    Any number of threads may wait for the same event.

    Python provides the Event object for communication between threads; it is a signal flag set by one thread, and other threads wait until the signal is raised.

    The Event object implements a simple thread-communication mechanism: it provides setting the signal, clearing the signal, and waiting, for communication between threads.

    1 Setting the signal

    Calling the Event object's set() method sets its internal flag to true. The Event object provides the isSet() method to check the state of the internal flag; after set() is called, isSet() returns True.

    2 Clearing the signal

    Calling the Event object's clear() method clears the internal flag, i.e. sets it to false; after clear() is called, isSet() returns False.

    3 Waiting

    The Event object's wait() method returns quickly only when the internal flag is true. When the internal flag is false, wait() blocks until the flag becomes true.

    The mechanism of event handling: a global "Flag" is defined; if "Flag" is False, event.wait() blocks; if "Flag" is True, event.wait() no longer blocks.

    • clear: set "Flag" to False
    • set: set "Flag" to True

    Example:

    #!/usr/bin/env python
    #coding:utf-8
    #__author__ = 'yaoyao'
    import threading
    def do(event):
        print('executes first')
        event.wait()
        print('executes last')
    event_obj = threading.Event()
    for i in range(10):
        t = threading.Thread(target=do, args=(event_obj,))
        t.start()
    print('start waiting')
    event_obj.clear()
    inp = input('type true: ')
    if inp == 'true':
        event_obj.set()
    

    The queue module:
    Queues are especially useful in threaded programming when information must be exchanged safely between several threads.

    class queue.Queue(maxsize=0) # first in, first out
    class queue.LifoQueue(maxsize=0) # last in, first out
    class queue.PriorityQueue(maxsize=0) # a queue whose stored items can be given a priority

    This is the constructor for a FIFO queue. maxsize is an integer that sets an upper bound on the number of items that can be placed in the queue. Insertion blocks once this size has been reached, until queue items are consumed. If maxsize is less than or equal to zero, the queue size is infinite.

    Producer-consumer model
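
    The original gives no code at this point; a minimal producer/consumer sketch with queue.Queue (the None sentinel used to stop the consumer is just one possible convention):

    import queue, threading

    q = queue.Queue(maxsize=5)         # FIFO queue holding at most 5 items

    def producer():
        for i in range(10):
            q.put('item %s' % i)       # blocks while the queue is full
        q.put(None)                    # sentinel: tell the consumer to stop

    def consumer():
        while True:
            item = q.get()             # blocks while the queue is empty
            if item is None:
                break
            print('consumed', item)

    t1 = threading.Thread(target=producer)
    t2 = threading.Thread(target=consumer)
    t1.start(); t2.start()
    t1.join(); t2.join()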


    The difference between a program and a process: a program is a collection of instructions, a static description text of a process; a process is one execution activity of the program, a dynamic concept.

        On the surface it looks as if the process is executing, but in fact it is threads that execute; a process contains at least one thread.

    II. Multiprocessing

    Example:

    #!/usr/bin/env python
    #coding:utf-8
    from multiprocessing import Process
    import threading
    import time
    
    def foo(i):
        print('start', i)
    if __name__ == "__main__":
        for i in range(10):
            p = Process(target=foo,args=(i,))
            p.start()
            print('I am a gorgeous separator')
    


    A process is the operating system's abstraction of a running program, that is, an abstraction of the CPU, main memory and IO devices.

        Thread: a thread is an executable context, the smallest unit the CPU needs in order to execute. The CPU is only responsible for computation. A single-core CPU can only do one thing at a time; the reason we can switch between programs is that the CPU runs very fast and keeps switching back and forth, so it looks to us as if several programs are executing at once.

    瞩目:由于经过之间的数额须要各自有着一份,所以创造进度需求的可怜大的付出。


    操作系统能够同临时候运维多少个进程,而各类进程都类似在独占的选用硬件


    • Each program gets its own independent space in memory; by default processes cannot access each other's data or operations.
    • A program (QQ, Excel, etc.) is exposed to the operating system for management as a whole, including all of its resource calls (memory management, network interfaces, and so on); this collection of resource management can be called a process.
    • For example, the whole of QQ can be called a process.
    • For a process to drive the CPU (i.e. issue instructions) it must first create a thread;
    • A process itself cannot execute; it is only a collection of resources. To execute, it must first create the smallest unit the operating system can schedule: a thread. A process must have at least one thread in order to execute, and when a process is created, a thread is created automatically.

        The operating system distinguishes processes by their PID, the process identifier. Processes can be given priorities.

    Sharing data between processes

    Each process has its own copy of the data; by default data is not shared.
    For example:

    #!/usr/bin/env python
    #coding:utf-8
    #__author__ = 'yaoyao'
    from multiprocessing import Process
    li = []
    
    def foo(i):
        li.append(i)
        print('the list inside the child process is', li)
    if __name__ == '__main__':
        for i in range(10):
            p = Process(target=foo,args=(i,))
            p.start()
    print('the list in the parent process is empty', li)
    

    The output looks like this:

    the list in the parent process is empty []
    the list inside the child process is [0]
    the list in the parent process is empty []
    the list inside the child process is [2]
    the list in the parent process is empty []
    the list inside the child process is [3]
    the list in the parent process is empty []
    the list inside the child process is [1]
    the list in the parent process is empty []
    the list inside the child process is [5]
    the list in the parent process is empty []
    the list inside the child process is [4]
    the list in the parent process is empty []
    the list in the parent process is empty []
    the list inside the child process is [6]
    the list in the parent process is empty []
    the list inside the child process is [7]
    the list in the parent process is empty []
    the list inside the child process is [8]
    the list in the parent process is empty []
    the list inside the child process is [9]
    

    Two ways to share data:

    1. Array

      #!/usr/bin/env python

      #coding:utf-8

      __author__ = 'yaoyao'

      from multiprocessing import Process,Array
      temp = Array('i', [11,22,33,44])
      def Foo(i):
          temp[i] = 100 + i
          for item in temp:
              print(i, '----->', item)

      if __name__ == "__main__":
          for i in range(1):
              p = Process(target=Foo, args=(i,))
              p.start()
      2. Manager().dict()
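
    The original only names this second approach; a minimal sketch of what it might look like with multiprocessing.Manager (the worker function and keys here are illustrative):

    from multiprocessing import Process, Manager

    def worker(d, i):
        d[i] = i * i                 # writes go through the manager and are visible to the parent

    if __name__ == '__main__':
        with Manager() as manager:
            d = manager.dict()       # a dict proxy shared between processes
            procs = [Process(target=worker, args=(d, i)) for i in range(5)]
            for p in procs:
                p.start()
            for p in procs:
                p.join()
            print(dict(d))           # e.g. {0: 0, 1: 1, 2: 4, 3: 9, 4: 16}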

    Coroutines

    A coroutine, also known as a micro-thread or fiber (English: coroutine). In one sentence: a coroutine is a lightweight user-space thread.

    A coroutine has its own register context and stack. When the coroutine scheduler switches, it saves the register context and the stack somewhere else, and when switching back it restores the previously saved register context and stack. Therefore:

    a coroutine can preserve the state of its last call (i.e. a particular combination of all its local state); every time it is re-entered, it is as if it returns to the state of the previous call, in other words it resumes at the position in the logical flow where it last left off.

    Advantages of coroutines:

    No overhead of thread context switching
    No overhead of locking and synchronizing atomic operations
    Easy to switch control flow, which simplifies the programming model
    High concurrency + high scalability + low cost: one CPU can easily support tens of thousands of coroutines, so they are well suited to high-concurrency workloads.
    

    Disadvantages:

    Cannot use multiple cores: a coroutine is essentially single-threaded; it cannot use several cores of a single CPU at the same time. Coroutines must be combined with processes to run on multiple CPUs. Of course, most of the programs we write day to day do not need this, unless they are CPU-bound.
    A blocking operation (such as IO) blocks the whole program.
    

    Example of implementing a coroutine with yield:

    import time
    import queue

    def consumer(name):
        print("--->starting eating baozi...")
        while True:
            new_baozi = yield
            print("[%s] is eating baozi %s" % (name, new_baozi))
            #time.sleep(1)

    def producer():
        r = con.__next__()
        r = con2.__next__()
        n = 0
        while n < 5:
            n += 1
            con.send(n)
            con2.send(n)
            print("\033[32;1m[producer]\033[0m is making baozi %s" % n)

    if __name__ == '__main__':
        con = consumer("c1")
        con2 = consumer("c2")
        p = producer()
    Greenlet


    Differences between processes and threads?

    • A thread shares the address space of the process that created it; each process has its own independent memory space.
    • Threads of a process access the process's data directly and share it; each child process of a parent gets its own copy of the data, independent of the others.
    • A thread can communicate directly with the other threads of the process that created it; communication between the child processes of a parent must go through an intermediary agent.
    • New threads are easy to create; creating a new process requires cloning its parent process.
    • A thread can control and operate on the other threads of the process that created it; a process can only control and operate on its child processes.
    • Changes to the main thread (cancellation, priority change, etc.) may affect the behaviour of the process's other threads; changes to a parent process do not affect its child processes.

        Threads are created by the main (primary) thread; a thread can directly create new threads. A process on a Linux system has one main thread.


     

    An example of multithreaded concurrency

    import threading,time
    
    def run(n):
        print("task",n)
        time.sleep(2)
    
    t1 = threading.Thread(target=run,args=("t1",))#target=此线程要执行的代码块(函数);args=参数(不定个数参数,只有一个参数也需要加`,`,这里是元组形式)
    t2 = threading.Thread(target=run,args=("t2",))
    t1.start()
    t2.start()
    
    • Start multiple threads
      ```python
      import threading,time

      def run(n):
          print("task", n)
          time.sleep(2)

      start_time = time.time()
      for i in range(50):
          t = threading.Thread(target=run, args=("t%s" % i,))
          t.start()

      print('cost', time.time() - start_time)
    ```

    • The execution time measured here is much less than 2 seconds, because the main thread and the child threads it starts run in parallel.

    • join() waits for a thread to finish before continuing, rather like wait
      ```python
      import threading
      import time

      def run(n):
          print('task:', n)
          time.sleep(2)

      start_time = time.time()
      thread_list = []
      for i in range(50):
          t = threading.Thread(target=run, args=(i,))
          t.start()
          # if t.join() were placed here, each thread would be waited for before the next one starts, and the threads would run serially
          thread_list.append(t)

      for t in thread_list:
          t.join()  # after starting the threads, join waits for every created thread to finish before the main thread continues

      print('cont:', time.time() - start_time)
      print(threading.current_thread(), threading.active_count())
      ```


    # -*- coding:utf-8 -*-

    from greenlet import greenlet

    def test1():
        print(12)
        gr2.switch()
        print(34)
        gr2.switch()

    def test2():
        print(56)
        gr1.switch()
        print(78)

    gr1 = greenlet(test1)
    gr2 = greenlet(test2)
    gr1.switch()

      
    Gevent

    Gevent is a third-party library that makes it easy to write concurrent synchronous or asynchronous code. The main pattern used in gevent is the Greenlet, a lightweight coroutine attached to Python as a C extension module. Greenlets all run inside the operating-system process of the main program, but they are scheduled cooperatively.

    import gevent

    def foo():
        print('Running in foo')
        gevent.sleep(0)
        print('Explicit context switch to foo again')

    def bar():
        print('Explicit context to bar')
        gevent.sleep(0)
        print('Implicit context switch back to bar')

    gevent.joinall([
        gevent.spawn(foo),
        gevent.spawn(bar),
    ])

    Output:

    Running in foo
    Explicit context to bar
    Explicit context switch to foo again
    Implicit context switch back to bar


    threading.current_thread() shows the current thread, and threading.active_count() gives the number of currently active threads


    • Here the result is a little over 2 seconds and the timing is correct. In this scenario join() must come after all the threads' start() calls; otherwise the threads run one after another and multithreading is pointless.

        There is no meaningful way to compare the speed of threads and processes.


    Daemon threads

    • Without join(), the main thread and the child threads run in parallel and independently of each other; with join(), the joined thread must finish before the other threads continue.
    • When a thread is set as a daemon thread, the main thread does not wait for it to finish; the program waits for the main thread to finish, but it does not wait for daemon threads
      ```python
      import threading
      import time

      def run(n):
          print('task:', n)
          time.sleep(2)

      start_time = time.time()
      thread_list = []
      for i in range(50):
          t = threading.Thread(target=run, args=(i,))
          t.setDaemon(True)  # set as a daemon thread; must be done before start()
          # daemon => a servant guarding its master (the main process/thread); when the master is done, the daemons are killed immediately
          t.start()
          thread_list.append(t)
      print('cont:', time.time() - start_time)
      ```


     

    The main thread is not a daemon thread (and cannot be set as one); it does not wait the 2 seconds for the child threads (which were set as daemon threads) and goes straight to the final print().




    Thread locks

    • When a thread is about to modify shared data, it can put a lock on the data to prevent other threads from modifying it before it has finished; any other thread that wants to modify the data must wait until the lock is released before it can access it.
    • A thread lock makes the threads run serially

      import threading

      def run(n):
          lock.acquire()        # acquire the lock
          global num
          num += 1
          lock.release()        # release the lock

      num = 0                   # the shared variable (added so the snippet runs)
      lock = threading.Lock()   # instantiate the lock
      for i in range(50):
          t = threading.Thread(target=run, args=(i,))
          t.start()

      print('num:', num)



    RLock (recursive lock)

    • Used when locks are nested; in plain terms, a big lock that contains child locks inside it
      ```python
      import threading, time

      def run1():
          print("grab the first part data")
          lock.acquire()
          global num
          num += 1
          lock.release()
          return num

      def run2():
          print("grab the second part data")
          lock.acquire()
          global num2
          num2 += 1
          lock.release()
          return num2

      def run3():
          lock.acquire()
          res = run1()
          print('--------between run1 and run2-----')
          res2 = run2()
          lock.release()
          print(res, res2)

      if __name__ == '__main__':
          num, num2 = 0, 0
          lock = threading.RLock()
          for i in range(10):
              t = threading.Thread(target=run3)
              t.start()

      while threading.active_count() != 1:
          print(threading.active_count())
      else:
          print('----all threads done---')
          print(num, num2)
    ```


     

    Semaphore

    • A mutex (thread lock) allows only one thread to change the data at a time, whereas a Semaphore allows a fixed number of threads to change the data at the same time; for example, if a toilet has 3 stalls, at most 3 people can use it at once, and the people behind can only go in after someone comes out.
    • Each time the semaphore is released, another waiting thread gets in immediately (for example, the concurrency limit in a socket server)

      import threading, time

      def run(n):
          semaphore.acquire()
          time.sleep(1)
          print("run the thread: %s\n" % n)
          semaphore.release()

      if __name__ == '__main__':
          num = 0
          semaphore = threading.BoundedSemaphore(5)  # at most 5 threads are allowed to run at the same time
          for i in range(20):
              t = threading.Thread(target=run, args=(i,))
              t.start()

      while threading.active_count() != 1:
          pass  # print(threading.active_count())
      else:
          print('----all threads done---')
          print(num)



    Inheritance-style multithreading

    • Generally not used

        Thread source code:

     

    Multithreading via a class:

    import threading,time
    
    class MyThread(threading.Thread):
        def __init__(self, n):
            super(MyThread, self).__init__()
            self.n = n
    
        def run(self):  # the method name here must be run
            print("running task", self.n)
            time.sleep(2)
    
    t1 = MyThread(1)
    t2 = MyThread(2)
    t1.start()
    t2.start()
    

     


    """Thread module emulating a subset of Java's threading model."""
    
    import sys as _sys
    import _thread
    
    from time import monotonic as _time
    from traceback import format_exc as _format_exc
    from _weakrefset import WeakSet
    from itertools import islice as _islice, count as _count
    try:
        from _collections import deque as _deque
    except ImportError:
        from collections import deque as _deque
    
    # Note regarding PEP 8 compliant names
    #  This threading model was originally inspired by Java, and inherited
    # the convention of camelCase function and method names from that
    # language. Those original names are not in any imminent danger of
    # being deprecated (even for Py3k),so this module provides them as an
    # alias for the PEP 8 compliant names
    # Note that using the new PEP 8 compliant names facilitates substitution
    # with the multiprocessing module, which doesn't provide the old
    # Java inspired names.
    
    __all__ = ['active_count', 'Condition', 'current_thread', 'enumerate', 'Event',
               'Lock', 'RLock', 'Semaphore', 'BoundedSemaphore', 'Thread', 'Barrier',
               'Timer', 'ThreadError', 'setprofile', 'settrace', 'local', 'stack_size']
    
    # Rename some stuff so "from threading import *" is safe
    _start_new_thread = _thread.start_new_thread
    _allocate_lock = _thread.allocate_lock
    _set_sentinel = _thread._set_sentinel
    get_ident = _thread.get_ident
    ThreadError = _thread.error
    try:
        _CRLock = _thread.RLock
    except AttributeError:
        _CRLock = None
    TIMEOUT_MAX = _thread.TIMEOUT_MAX
    del _thread
    
    
    # Support for profile and trace hooks
    
    _profile_hook = None
    _trace_hook = None
    
    def setprofile(func):
        """Set a profile function for all threads started from the threading module.
    
        The func will be passed to sys.setprofile() for each thread, before its
        run() method is called.
    
        """
        global _profile_hook
        _profile_hook = func
    
    def settrace(func):
        """Set a trace function for all threads started from the threading module.
    
        The func will be passed to sys.settrace() for each thread, before its run()
        method is called.
    
        """
        global _trace_hook
        _trace_hook = func
    
    # Synchronization classes
    
    Lock = _allocate_lock
    
    def RLock(*args, **kwargs):
        """Factory function that returns a new reentrant lock.
    
        A reentrant lock must be released by the thread that acquired it. Once a
        thread has acquired a reentrant lock, the same thread may acquire it again
        without blocking; the thread must release it once for each time it has
        acquired it.
    
        """
        if _CRLock is None:
            return _PyRLock(*args, **kwargs)
        return _CRLock(*args, **kwargs)
    
    class _RLock:
        """This class implements reentrant lock objects.
    
        A reentrant lock must be released by the thread that acquired it. Once a
        thread has acquired a reentrant lock, the same thread may acquire it
        again without blocking; the thread must release it once for each time it
        has acquired it.
    
        """
    
        def __init__(self):
            self._block = _allocate_lock()
            self._owner = None
            self._count = 0
    
        def __repr__(self):
            owner = self._owner
            try:
                owner = _active[owner].name
            except KeyError:
                pass
            return "<%s %s.%s object owner=%r count=%d at %s>" % (
                "locked" if self._block.locked() else "unlocked",
                self.__class__.__module__,
                self.__class__.__qualname__,
                owner,
                self._count,
                hex(id(self))
            )
    
        def acquire(self, blocking=True, timeout=-1):
            """Acquire a lock, blocking or non-blocking.
    
            When invoked without arguments: if this thread already owns the lock,
            increment the recursion level by one, and return immediately. Otherwise,
            if another thread owns the lock, block until the lock is unlocked. Once
            the lock is unlocked (not owned by any thread), then grab ownership, set
            the recursion level to one, and return. If more than one thread is
            blocked waiting until the lock is unlocked, only one at a time will be
            able to grab ownership of the lock. There is no return value in this
            case.
    
            When invoked with the blocking argument set to true, do the same thing
            as when called without arguments, and return true.
    
            When invoked with the blocking argument set to false, do not block. If a
            call without an argument would block, return false immediately;
            otherwise, do the same thing as when called without arguments, and
            return true.
    
            When invoked with the floating-point timeout argument set to a positive
            value, block for at most the number of seconds specified by timeout
            and as long as the lock cannot be acquired.  Return true if the lock has
            been acquired, false if the timeout has elapsed.
    
            """
            me = get_ident()
            if self._owner == me:
                self._count += 1
                return 1
            rc = self._block.acquire(blocking, timeout)
            if rc:
                self._owner = me
                self._count = 1
            return rc
    
        __enter__ = acquire
    
        def release(self):
            """Release a lock, decrementing the recursion level.
    
            If after the decrement it is zero, reset the lock to unlocked (not owned
            by any thread), and if any other threads are blocked waiting for the
            lock to become unlocked, allow exactly one of them to proceed. If after
            the decrement the recursion level is still nonzero, the lock remains
            locked and owned by the calling thread.
    
            Only call this method when the calling thread owns the lock. A
            RuntimeError is raised if this method is called when the lock is
            unlocked.
    
            There is no return value.
    
            """
            if self._owner != get_ident():
                raise RuntimeError("cannot release un-acquired lock")
            self._count = count = self._count - 1
            if not count:
                self._owner = None
                self._block.release()
    
        def __exit__(self, t, v, tb):
            self.release()
    
        # Internal methods used by condition variables
    
        def _acquire_restore(self, state):
            self._block.acquire()
            self._count, self._owner = state
    
        def _release_save(self):
            if self._count == 0:
                raise RuntimeError("cannot release un-acquired lock")
            count = self._count
            self._count = 0
            owner = self._owner
            self._owner = None
            self._block.release()
            return (count, owner)
    
        def _is_owned(self):
            return self._owner == get_ident()
    
    _PyRLock = _RLock
    
    
    class Condition:
        """Class that implements a condition variable.
    
        A condition variable allows one or more threads to wait until they are
        notified by another thread.
    
        If the lock argument is given and not None, it must be a Lock or RLock
        object, and it is used as the underlying lock. Otherwise, a new RLock object
        is created and used as the underlying lock.
    
        """
    
        def __init__(self, lock=None):
            if lock is None:
                lock = RLock()
            self._lock = lock
            # Export the lock's acquire() and release() methods
            self.acquire = lock.acquire
            self.release = lock.release
            # If the lock defines _release_save() and/or _acquire_restore(),
            # these override the default implementations (which just call
            # release() and acquire() on the lock).  Ditto for _is_owned().
            try:
                self._release_save = lock._release_save
            except AttributeError:
                pass
            try:
                self._acquire_restore = lock._acquire_restore
            except AttributeError:
                pass
            try:
                self._is_owned = lock._is_owned
            except AttributeError:
                pass
            self._waiters = _deque()
    
        def __enter__(self):
            return self._lock.__enter__()
    
        def __exit__(self, *args):
            return self._lock.__exit__(*args)
    
        def __repr__(self):
            return "<Condition(%s, %d)>" % (self._lock, len(self._waiters))
    
        def _release_save(self):
            self._lock.release()           # No state to save
    
        def _acquire_restore(self, x):
            self._lock.acquire()           # Ignore saved state
    
        def _is_owned(self):
            # Return True if lock is owned by current_thread.
            # This method is called only if _lock doesn't have _is_owned().
            if self._lock.acquire(0):
                self._lock.release()
                return False
            else:
                return True
    
        def wait(self, timeout=None):
            """Wait until notified or until a timeout occurs.
    
            If the calling thread has not acquired the lock when this method is
            called, a RuntimeError is raised.
    
            This method releases the underlying lock, and then blocks until it is
            awakened by a notify() or notify_all() call for the same condition
            variable in another thread, or until the optional timeout occurs. Once
            awakened or timed out, it re-acquires the lock and returns.
    
            When the timeout argument is present and not None, it should be a
            floating point number specifying a timeout for the operation in seconds
            (or fractions thereof).
    
            When the underlying lock is an RLock, it is not released using its
            release() method, since this may not actually unlock the lock when it
            was acquired multiple times recursively. Instead, an internal interface
            of the RLock class is used, which really unlocks it even when it has
            been recursively acquired several times. Another internal interface is
            then used to restore the recursion level when the lock is reacquired.
    
            """
            if not self._is_owned():
                raise RuntimeError("cannot wait on un-acquired lock")
            waiter = _allocate_lock()
            waiter.acquire()
            self._waiters.append(waiter)
            saved_state = self._release_save()
            gotit = False
            try:    # restore state no matter what (e.g., KeyboardInterrupt)
                if timeout is None:
                    waiter.acquire()
                    gotit = True
                else:
                    if timeout > 0:
                        gotit = waiter.acquire(True, timeout)
                    else:
                        gotit = waiter.acquire(False)
                return gotit
            finally:
                self._acquire_restore(saved_state)
                if not gotit:
                    try:
                        self._waiters.remove(waiter)
                    except ValueError:
                        pass
    
        def wait_for(self, predicate, timeout=None):
            """Wait until a condition evaluates to True.
    
            predicate should be a callable which result will be interpreted as a
            boolean value.  A timeout may be provided giving the maximum time to
            wait.
    
            """
            endtime = None
            waittime = timeout
            result = predicate()
            while not result:
                if waittime is not None:
                    if endtime is None:
                        endtime = _time() + waittime
                    else:
                        waittime = endtime - _time()
                        if waittime <= 0:
                            break
                self.wait(waittime)
                result = predicate()
            return result
    
        def notify(self, n=1):
            """Wake up one or more threads waiting on this condition, if any.
    
            If the calling thread has not acquired the lock when this method is
            called, a RuntimeError is raised.
    
            This method wakes up at most n of the threads waiting for the condition
            variable; it is a no-op if no threads are waiting.
    
            """
            if not self._is_owned():
                raise RuntimeError("cannot notify on un-acquired lock")
            all_waiters = self._waiters
            waiters_to_notify = _deque(_islice(all_waiters, n))
            if not waiters_to_notify:
                return
            for waiter in waiters_to_notify:
                waiter.release()
                try:
                    all_waiters.remove(waiter)
                except ValueError:
                    pass
    
        def notify_all(self):
            """Wake up all threads waiting on this condition.
    
            If the calling thread has not acquired the lock when this method
            is called, a RuntimeError is raised.
    
            """
            self.notify(len(self._waiters))
    
        notifyAll = notify_all
    
    
    class Semaphore:
        """This class implements semaphore objects.
    
        Semaphores manage a counter representing the number of release() calls minus
        the number of acquire() calls, plus an initial value. The acquire() method
        blocks if necessary until it can return without making the counter
        negative. If not given, value defaults to 1.
    
        """
    
        # After Tim Peters' semaphore class, but not quite the same (no maximum)
    
        def __init__(self, value=1):
            if value < 0:
                raise ValueError("semaphore initial value must be >= 0")
            self._cond = Condition(Lock())
            self._value = value
    
        def acquire(self, blocking=True, timeout=None):
            """Acquire a semaphore, decrementing the internal counter by one.
    
            When invoked without arguments: if the internal counter is larger than
            zero on entry, decrement it by one and return immediately. If it is zero
            on entry, block, waiting until some other thread has called release() to
            make it larger than zero. This is done with proper interlocking so that
            if multiple acquire() calls are blocked, release() will wake exactly one
            of them up. The implementation may pick one at random, so the order in
            which blocked threads are awakened should not be relied on. There is no
            return value in this case.
    
            When invoked with blocking set to true, do the same thing as when called
            without arguments, and return true.
    
            When invoked with blocking set to false, do not block. If a call without
            an argument would block, return false immediately; otherwise, do the
            same thing as when called without arguments, and return true.
    
            When invoked with a timeout other than None, it will block for at
            most timeout seconds.  If acquire does not complete successfully in
            that interval, return false.  Return true otherwise.
    
            """
            if not blocking and timeout is not None:
                raise ValueError("can't specify timeout for non-blocking acquire")
            rc = False
            endtime = None
            with self._cond:
                while self._value == 0:
                    if not blocking:
                        break
                    if timeout is not None:
                        if endtime is None:
                            endtime = _time() + timeout
                        else:
                            timeout = endtime - _time()
                            if timeout <= 0:
                                break
                    self._cond.wait(timeout)
                else:
                    self._value -= 1
                    rc = True
            return rc
    
        __enter__ = acquire
    
        def release(self):
            """Release a semaphore, incrementing the internal counter by one.
    
            When the counter is zero on entry and another thread is waiting for it
            to become larger than zero again, wake up that thread.
    
            """
            with self._cond:
                self._value += 1
                self._cond.notify()
    
        def __exit__(self, t, v, tb):
            self.release()
    
    
    class BoundedSemaphore(Semaphore):
        """Implements a bounded semaphore.
    
        A bounded semaphore checks to make sure its current value doesn't exceed its
        initial value. If it does, ValueError is raised. In most situations
        semaphores are used to guard resources with limited capacity.
    
        If the semaphore is released too many times it's a sign of a bug. If not
        given, value defaults to 1.
    
        Like regular semaphores, bounded semaphores manage a counter representing
        the number of release() calls minus the number of acquire() calls, plus an
        initial value. The acquire() method blocks if necessary until it can return
        without making the counter negative. If not given, value defaults to 1.
    
        """
    
        def __init__(self, value=1):
            Semaphore.__init__(self, value)
            self._initial_value = value
    
        def release(self):
            """Release a semaphore, incrementing the internal counter by one.
    
            When the counter is zero on entry and another thread is waiting for it
            to become larger than zero again, wake up that thread.
    
            If the number of releases exceeds the number of acquires,
            raise a ValueError.
    
            """
            with self._cond:
                if self._value >= self._initial_value:
                    raise ValueError("Semaphore released too many times")
                self._value += 1
                self._cond.notify()
    
    
    class Event:
        """Class implementing event objects.
    
        Events manage a flag that can be set to true with the set() method and reset
        to false with the clear() method. The wait() method blocks until the flag is
        true.  The flag is initially false.
    
        """
    
        # After Tim Peters' event class (without is_posted())
    
        def __init__(self):
            self._cond = Condition(Lock())
            self._flag = False
    
        def _reset_internal_locks(self):
            # private!  called by Thread._reset_internal_locks by _after_fork()
            self._cond.__init__(Lock())
    
        def is_set(self):
            """Return true if and only if the internal flag is true."""
            return self._flag
    
        isSet = is_set
    
        def set(self):
            """Set the internal flag to true.
    
            All threads waiting for it to become true are awakened. Threads
            that call wait() once the flag is true will not block at all.
    
            """
            with self._cond:
                self._flag = True
                self._cond.notify_all()
    
        def clear(self):
            """Reset the internal flag to false.
    
            Subsequently, threads calling wait() will block until set() is called to
            set the internal flag to true again.
    
            """
            with self._cond:
                self._flag = False
    
        def wait(self, timeout=None):
            """Block until the internal flag is true.
    
            If the internal flag is true on entry, return immediately. Otherwise,
            block until another thread calls set() to set the flag to true, or until
            the optional timeout occurs.
    
            When the timeout argument is present and not None, it should be a
            floating point number specifying a timeout for the operation in seconds
            (or fractions thereof).
    
            This method returns the internal flag on exit, so it will always return
            True except if a timeout is given and the operation times out.
    
            """
            with self._cond:
                signaled = self._flag
                if not signaled:
                    signaled = self._cond.wait(timeout)
                return signaled
    
    
    # A barrier class.  Inspired in part by the pthread_barrier_* api and
    # the CyclicBarrier class from Java.  See
    # http://sourceware.org/pthreads-win32/manual/pthread_barrier_init.html and
    # http://java.sun.com/j2se/1.5.0/docs/api/java/util/concurrent/
    #        CyclicBarrier.html
    # for information.
    # We maintain two main states, 'filling' and 'draining' enabling the barrier
    # to be cyclic.  Threads are not allowed into it until it has fully drained
    # since the previous cycle.  In addition, a 'resetting' state exists which is
    # similar to 'draining' except that threads leave with a BrokenBarrierError,
    # and a 'broken' state in which all threads get the exception.
    class Barrier:
        """Implements a Barrier.
    
        Useful for synchronizing a fixed number of threads at known synchronization
        points.  Threads block on 'wait()' and are simultaneously once they have all
        made that call.
    
        """
    
        def __init__(self, parties, action=None, timeout=None):
            """Create a barrier, initialised to 'parties' threads.
    
            'action' is a callable which, when supplied, will be called by one of
            the threads after they have all entered the barrier and just prior to
            releasing them all. If a 'timeout' is provided, it is uses as the
            default for all subsequent 'wait()' calls.
    
            """
            self._cond = Condition(Lock())
            self._action = action
            self._timeout = timeout
            self._parties = parties
            self._state = 0 #0 filling, 1, draining, -1 resetting, -2 broken
            self._count = 0
    
        def wait(self, timeout=None):
            """Wait for the barrier.
    
            When the specified number of threads have started waiting, they are all
            simultaneously awoken. If an 'action' was provided for the barrier, one
            of the threads will have executed that callback prior to returning.
            Returns an individual index number from 0 to 'parties-1'.
    
            """
            if timeout is None:
                timeout = self._timeout
            with self._cond:
                self._enter() # Block while the barrier drains.
                index = self._count
                self._count += 1
                try:
                    if index + 1 == self._parties:
                        # We release the barrier
                        self._release()
                    else:
                        # We wait until someone releases us
                        self._wait(timeout)
                    return index
                finally:
                    self._count -= 1
                    # Wake up any threads waiting for barrier to drain.
                    self._exit()
    
        # Block until the barrier is ready for us, or raise an exception
        # if it is broken.
        def _enter(self):
            while self._state in (-1, 1):
                # It is draining or resetting, wait until done
                self._cond.wait()
            #see if the barrier is in a broken state
            if self._state < 0:
                raise BrokenBarrierError
            assert self._state == 0
    
        # Optionally run the 'action' and release the threads waiting
        # in the barrier.
        def _release(self):
            try:
                if self._action:
                    self._action()
                # enter draining state
                self._state = 1
                self._cond.notify_all()
            except:
                #an exception during the _action handler.  Break and reraise
                self._break()
                raise
    
        # Wait in the barrier until we are relased.  Raise an exception
        # if the barrier is reset or broken.
        def _wait(self, timeout):
            if not self._cond.wait_for(lambda : self._state != 0, timeout):
                #timed out.  Break the barrier
                self._break()
                raise BrokenBarrierError
            if self._state < 0:
                raise BrokenBarrierError
            assert self._state == 1
    
        # If we are the last thread to exit the barrier, signal any threads
        # waiting for the barrier to drain.
        def _exit(self):
            if self._count == 0:
                if self._state in (-1, 1):
                    #resetting or draining
                    self._state = 0
                    self._cond.notify_all()
    
        def reset(self):
            """Reset the barrier to the initial state.
    
            Any threads currently waiting will get the BrokenBarrier exception
            raised.
    
            """
            with self._cond:
                if self._count > 0:
                    if self._state == 0:
                        #reset the barrier, waking up threads
                        self._state = -1
                    elif self._state == -2:
                        #was broken, set it to reset state
                        #which clears when the last thread exits
                        self._state = -1
                else:
                    self._state = 0
                self._cond.notify_all()
    
        def abort(self):
            """Place the barrier into a 'broken' state.
    
            Useful in case of error.  Any currently waiting threads and threads
            attempting to 'wait()' will have BrokenBarrierError raised.
    
            """
            with self._cond:
                self._break()
    
        def _break(self):
            # An internal error was detected.  The barrier is set to
            # a broken state all parties awakened.
            self._state = -2
            self._cond.notify_all()
    
        @property
        def parties(self):
            """Return the number of threads required to trip the barrier."""
            return self._parties
    
        @property
        def n_waiting(self):
            """Return the number of threads currently waiting at the barrier."""
            # We don't need synchronization here since this is an ephemeral result
            # anyway.  It returns the correct value in the steady state.
            if self._state == 0:
                return self._count
            return 0
    
        @property
        def broken(self):
            """Return True if the barrier is in a broken state."""
            return self._state == -2
    
    # exception raised by the Barrier class
    class BrokenBarrierError(RuntimeError):
        pass
    
    
    # Helper to generate new thread names
    _counter = _count().__next__
    _counter() # Consume 0 so first non-main thread has id 1.
    def _newname(template="Thread-%d"):
        return template % _counter()
    
    # Active thread administration
    _active_limbo_lock = _allocate_lock()
    _active = {}    # maps thread id to Thread object
    _limbo = {}
    _dangling = WeakSet()
    
    # Main class for threads
    
    class Thread:
        """A class that represents a thread of control.
    
        This class can be safely subclassed in a limited fashion. There are two ways
        to specify the activity: by passing a callable object to the constructor, or
        by overriding the run() method in a subclass.
    
        """
    
        _initialized = False
        # Need to store a reference to sys.exc_info for printing
        # out exceptions when a thread tries to use a global var. during interp.
        # shutdown and thus raises an exception about trying to perform some
        # operation on/with a NoneType
        _exc_info = _sys.exc_info
        # Keep sys.exc_clear too to clear the exception just before
        # allowing .join() to return.
        #XXX __exc_clear = _sys.exc_clear
    
        def __init__(self, group=None, target=None, name=None,
                     args=(), kwargs=None, *, daemon=None):
            """This constructor should always be called with keyword arguments. Arguments are:
    
            *group* should be None; reserved for future extension when a ThreadGroup
            class is implemented.
    
            *target* is the callable object to be invoked by the run()
            method. Defaults to None, meaning nothing is called.
    
            *name* is the thread name. By default, a unique name is constructed of
            the form "Thread-N" where N is a small decimal number.
    
            *args* is the argument tuple for the target invocation. Defaults to ().
    
            *kwargs* is a dictionary of keyword arguments for the target
            invocation. Defaults to {}.
    
            If a subclass overrides the constructor, it must make sure to invoke
            the base class constructor (Thread.__init__()) before doing anything
            else to the thread.
    
            """
            assert group is None, "group argument must be None for now"
            if kwargs is None:
                kwargs = {}
            self._target = target
            self._name = str(name or _newname())
            self._args = args
            self._kwargs = kwargs
            if daemon is not None:
                self._daemonic = daemon
            else:
                self._daemonic = current_thread().daemon
            self._ident = None
            self._tstate_lock = None
            self._started = Event()
            self._is_stopped = False
            self._initialized = True
            # sys.stderr is not stored in the class like
            # sys.exc_info since it can be changed between instances
            self._stderr = _sys.stderr
            # For debugging and _after_fork()
            _dangling.add(self)
    
        def _reset_internal_locks(self, is_alive):
            # private!  Called by _after_fork() to reset our internal locks as
            # they may be in an invalid state leading to a deadlock or crash.
            self._started._reset_internal_locks()
            if is_alive:
                self._set_tstate_lock()
            else:
                # The thread isn't alive after fork: it doesn't have a tstate
                # anymore.
                self._is_stopped = True
                self._tstate_lock = None
    
        def __repr__(self):
            assert self._initialized, "Thread.__init__() was not called"
            status = "initial"
            if self._started.is_set():
                status = "started"
            self.is_alive() # easy way to get ._is_stopped set when appropriate
            if self._is_stopped:
                status = "stopped"
            if self._daemonic:
                status += " daemon"
            if self._ident is not None:
                status += " %s" % self._ident
            return "<%s(%s, %s)>" % (self.__class__.__name__, self._name, status)
    
        def start(self):
            """Start the thread's activity.
    
            It must be called at most once per thread object. It arranges for the
            object's run() method to be invoked in a separate thread of control.
    
            This method will raise a RuntimeError if called more than once on the
            same thread object.
    
            """
            if not self._initialized:
                raise RuntimeError("thread.__init__() not called")
    
            if self._started.is_set():
                raise RuntimeError("threads can only be started once")
            with _active_limbo_lock:
                _limbo[self] = self
            try:
                _start_new_thread(self._bootstrap, ())
            except Exception:
                with _active_limbo_lock:
                    del _limbo[self]
                raise
            self._started.wait()
    
        def run(self):
            """Method representing the thread's activity.
    
            You may override this method in a subclass. The standard run() method
            invokes the callable object passed to the object's constructor as the
            target argument, if any, with sequential and keyword arguments taken
            from the args and kwargs arguments, respectively.
    
            """
            try:
                if self._target:
                    self._target(*self._args, **self._kwargs)
            finally:
                # Avoid a refcycle if the thread is running a function with
                # an argument that has a member that points to the thread.
                del self._target, self._args, self._kwargs
    
        def _bootstrap(self):
            # Wrapper around the real bootstrap code that ignores
            # exceptions during interpreter cleanup.  Those typically
            # happen when a daemon thread wakes up at an unfortunate
            # moment, finds the world around it destroyed, and raises some
            # random exception *** while trying to report the exception in
            # _bootstrap_inner() below ***.  Those random exceptions
            # don't help anybody, and they confuse users, so we suppress
            # them.  We suppress them only when it appears that the world
            # indeed has already been destroyed, so that exceptions in
            # _bootstrap_inner() during normal business hours are properly
            # reported.  Also, we only suppress them for daemonic threads;
            # if a non-daemonic encounters this, something else is wrong.
            try:
                self._bootstrap_inner()
            except:
                if self._daemonic and _sys is None:
                    return
                raise
    
        def _set_ident(self):
            self._ident = get_ident()
    
        def _set_tstate_lock(self):
            """
            Set a lock object which will be released by the interpreter when
            the underlying thread state (see pystate.h) gets deleted.
            """
            self._tstate_lock = _set_sentinel()
            self._tstate_lock.acquire()
    
        def _bootstrap_inner(self):
            try:
                self._set_ident()
                self._set_tstate_lock()
                self._started.set()
                with _active_limbo_lock:
                    _active[self._ident] = self
                    del _limbo[self]
    
                if _trace_hook:
                    _sys.settrace(_trace_hook)
                if _profile_hook:
                    _sys.setprofile(_profile_hook)
    
                try:
                    self.run()
                except SystemExit:
                    pass
                except:
                    # If sys.stderr is no more (most likely from interpreter
                    # shutdown) use self._stderr.  Otherwise still use sys (as in
                    # _sys) in case sys.stderr was redefined since the creation of
                    # self.
                    if _sys and _sys.stderr is not None:
                        print("Exception in thread %s:\n%s" %
                              (self.name, _format_exc()), file=_sys.stderr)
                    elif self._stderr is not None:
                        # Do the best job possible w/o a huge amt. of code to
                        # approximate a traceback (code ideas from
                        # Lib/traceback.py)
                        exc_type, exc_value, exc_tb = self._exc_info()
                        try:
                            print((
                                "Exception in thread " + self.name +
                                " (most likely raised during interpreter shutdown):"), file=self._stderr)
                            print((
                                "Traceback (most recent call last):"), file=self._stderr)
                            while exc_tb:
                                print((
                                    '  File "%s", line %s, in %s' %
                                    (exc_tb.tb_frame.f_code.co_filename,
                                        exc_tb.tb_lineno,
                                        exc_tb.tb_frame.f_code.co_name)), file=self._stderr)
                                exc_tb = exc_tb.tb_next
                            print(("%s: %s" % (exc_type, exc_value)), file=self._stderr)
                        # Make sure that exc_tb gets deleted since it is a memory
                        # hog; deleting everything else is just for thoroughness
                        finally:
                            del exc_type, exc_value, exc_tb
                finally:
                    # Prevent a race in
                    # test_threading.test_no_refcycle_through_target when
                    # the exception keeps the target alive past when we
                    # assert that it's dead.
                    #XXX self._exc_clear()
                    pass
            finally:
                with _active_limbo_lock:
                    try:
                        # We don't call self._delete() because it also
                        # grabs _active_limbo_lock.
                        del _active[get_ident()]
                    except:
                        pass
    
        def _stop(self):
            # After calling ._stop(), .is_alive() returns False and .join() returns
            # immediately.  ._tstate_lock must be released before calling ._stop().
            #
            # Normal case:  C code at the end of the thread's life
            # (release_sentinel in _threadmodule.c) releases ._tstate_lock, and
            # that's detected by our ._wait_for_tstate_lock(), called by .join()
            # and .is_alive().  Any number of threads _may_ call ._stop()
            # simultaneously (for example, if multiple threads are blocked in
            # .join() calls), and they're not serialized.  That's harmless -
            # they'll just make redundant rebindings of ._is_stopped and
            # ._tstate_lock.  Obscure:  we rebind ._tstate_lock last so that the
            # "assert self._is_stopped" in ._wait_for_tstate_lock() always works
            # (the assert is executed only if ._tstate_lock is None).
            #
            # Special case:  _main_thread releases ._tstate_lock via this
            # module's _shutdown() function.
            lock = self._tstate_lock
            if lock is not None:
                assert not lock.locked()
            self._is_stopped = True
            self._tstate_lock = None
    
        def _delete(self):
            "Remove current thread from the dict of currently running threads."
    
            # Notes about running with _dummy_thread:
            #
            # Must take care to not raise an exception if _dummy_thread is being
            # used (and thus this module is being used as an instance of
            # dummy_threading).  _dummy_thread.get_ident() always returns -1 since
            # there is only one thread if _dummy_thread is being used.  Thus
            # len(_active) is always <= 1 here, and any Thread instance created
            # overwrites the (if any) thread currently registered in _active.
            #
            # An instance of _MainThread is always created by 'threading'.  This
            # gets overwritten the instant an instance of Thread is created; both
            # threads return -1 from _dummy_thread.get_ident() and thus have the
            # same key in the dict.  So when the _MainThread instance created by
            # 'threading' tries to clean itself up when atexit calls this method
            # it gets a KeyError if another Thread instance was created.
            #
            # This all means that KeyError from trying to delete something from
            # _active if dummy_threading is being used is a red herring.  But
            # since it isn't if dummy_threading is *not* being used then don't
            # hide the exception.
    
            try:
                with _active_limbo_lock:
                    del _active[get_ident()]
                    # There must not be any python code between the previous line
                    # and after the lock is released.  Otherwise a tracing function
                    # could try to acquire the lock again in the same thread, (in
                    # current_thread()), and would block.
            except KeyError:
                if 'dummy_threading' not in _sys.modules:
                    raise
    
        def join(self, timeout=None):
            """Wait until the thread terminates.
    
            This blocks the calling thread until the thread whose join() method is
            called terminates -- either normally or through an unhandled exception
            or until the optional timeout occurs.
    
            When the timeout argument is present and not None, it should be a
            floating point number specifying a timeout for the operation in seconds
            (or fractions thereof). As join() always returns None, you must call
            isAlive() after join() to decide whether a timeout happened -- if the
            thread is still alive, the join() call timed out.
    
            When the timeout argument is not present or None, the operation will
            block until the thread terminates.
    
            A thread can be join()ed many times.
    
            join() raises a RuntimeError if an attempt is made to join the current
            thread as that would cause a deadlock. It is also an error to join() a
            thread before it has been started and attempts to do so raises the same
            exception.
    
            """
            if not self._initialized:
                raise RuntimeError("Thread.__init__() not called")
            if not self._started.is_set():
                raise RuntimeError("cannot join thread before it is started")
            if self is current_thread():
                raise RuntimeError("cannot join current thread")
    
            if timeout is None:
                self._wait_for_tstate_lock()
            else:
                # the behavior of a negative timeout isn't documented, but
                # historically .join(timeout=x) for x<0 has acted as if timeout=0
                self._wait_for_tstate_lock(timeout=max(timeout, 0))
    
        def _wait_for_tstate_lock(self, block=True, timeout=-1):
            # Issue #18808: wait for the thread state to be gone.
            # At the end of the thread's life, after all knowledge of the thread
            # is removed from C data structures, C code releases our _tstate_lock.
            # This method passes its arguments to _tstate_lock.acquire().
            # If the lock is acquired, the C code is done, and self._stop() is
            # called.  That sets ._is_stopped to True, and ._tstate_lock to None.
            lock = self._tstate_lock
            if lock is None:  # already determined that the C code is done
                assert self._is_stopped
            elif lock.acquire(block, timeout):
                lock.release()
                self._stop()
    
        @property
        def name(self):
            """A string used for identification purposes only.
    
            It has no semantics. Multiple threads may be given the same name. The
            initial name is set by the constructor.
    
            """
            assert self._initialized, "Thread.__init__() not called"
            return self._name
    
        @name.setter
        def name(self, name):
            assert self._initialized, "Thread.__init__() not called"
            self._name = str(name)
    
        @property
        def ident(self):
            """Thread identifier of this thread or None if it has not been started.
    
            This is a nonzero integer. See the thread.get_ident() function. Thread
            identifiers may be recycled when a thread exits and another thread is
            created. The identifier is available even after the thread has exited.
    
            """
            assert self._initialized, "Thread.__init__() not called"
            return self._ident
    
        def is_alive(self):
            """Return whether the thread is alive.
    
            This method returns True just before the run() method starts until just
            after the run() method terminates. The module function enumerate()
            returns a list of all alive threads.
    
            """
            assert self._initialized, "Thread.__init__() not called"
            if self._is_stopped or not self._started.is_set():
                return False
            self._wait_for_tstate_lock(False)
            return not self._is_stopped
    
        isAlive = is_alive
    
        @property
        def daemon(self):
            """A boolean value indicating whether this thread is a daemon thread.
    
            This must be set before start() is called, otherwise RuntimeError is
            raised. Its initial value is inherited from the creating thread; the
            main thread is not a daemon thread and therefore all threads created in
            the main thread default to daemon = False.
    
            The entire Python program exits when no alive non-daemon threads are
            left.
    
            """
            assert self._initialized, "Thread.__init__() not called"
            return self._daemonic
    
        @daemon.setter
        def daemon(self, daemonic):
            if not self._initialized:
                raise RuntimeError("Thread.__init__() not called")
            if self._started.is_set():
                raise RuntimeError("cannot set daemon status of active thread")
            self._daemonic = daemonic
    
        def isDaemon(self):
            return self.daemon
    
        def setDaemon(self, daemonic):
            self.daemon = daemonic
    
        def getName(self):
            return self.name
    
        def setName(self, name):
            self.name = name
    
    # The timer class was contributed by Itamar Shtull-Trauring
    
    class Timer(Thread):
        """Call a function after a specified number of seconds:
    
                t = Timer(30.0, f, args=None, kwargs=None)
                t.start()
                t.cancel()     # stop the timer's action if it's still waiting
    
        """
    
        def __init__(self, interval, function, args=None, kwargs=None):
            Thread.__init__(self)
            self.interval = interval
            self.function = function
            self.args = args if args is not None else []
            self.kwargs = kwargs if kwargs is not None else {}
            self.finished = Event()
    
        def cancel(self):
            """Stop the timer if it hasn't finished yet."""
            self.finished.set()
    
        def run(self):
            self.finished.wait(self.interval)
            if not self.finished.is_set():
                self.function(*self.args, **self.kwargs)
            self.finished.set()
    
    # Special thread class to represent the main thread
    # This is garbage collected through an exit handler
    
    class _MainThread(Thread):
    
        def __init__(self):
            Thread.__init__(self, name="MainThread", daemon=False)
            self._set_tstate_lock()
            self._started.set()
            self._set_ident()
            with _active_limbo_lock:
                _active[self._ident] = self
    
    
    # Dummy thread class to represent threads not started here.
    # These aren't garbage collected when they die, nor can they be waited for.
    # If they invoke anything in threading.py that calls current_thread(), they
    # leave an entry in the _active dict forever after.
    # Their purpose is to return *something* from current_thread().
    # They are marked as daemon threads so we won't wait for them
    # when we exit (conform previous semantics).
    
    class _DummyThread(Thread):
    
        def __init__(self):
            Thread.__init__(self, name=_newname("Dummy-%d"), daemon=True)
    
            self._started.set()
            self._set_ident()
            with _active_limbo_lock:
                _active[self._ident] = self
    
        def _stop(self):
            pass
    
        def join(self, timeout=None):
            assert False, "cannot join a dummy thread"
    
    
    # Global API functions
    
    def current_thread():
        """Return the current Thread object, corresponding to the caller's thread of control.
    
        If the caller's thread of control was not created through the threading
        module, a dummy thread object with limited functionality is returned.
    
        """
        try:
            return _active[get_ident()]
        except KeyError:
            return _DummyThread()
    
    currentThread = current_thread
    
    def active_count():
        """Return the number of Thread objects currently alive.
    
        The returned count is equal to the length of the list returned by
        enumerate().
    
        """
        with _active_limbo_lock:
            return len(_active) + len(_limbo)
    
    activeCount = active_count
    
    def _enumerate():
        # Same as enumerate(), but without the lock. Internal use only.
        return list(_active.values()) + list(_limbo.values())
    
    def enumerate():
        """Return a list of all Thread objects currently alive.
    
        The list includes daemonic threads, dummy thread objects created by
        current_thread(), and the main thread. It excludes terminated threads and
        threads that have not yet been started.
    
        """
        with _active_limbo_lock:
            return list(_active.values()) + list(_limbo.values())
    
    from _thread import stack_size
    
    # Create the main thread object,
    # and make it available for the interpreter
    # (Py_Main) as threading._shutdown.
    
    _main_thread = _MainThread()
    
    def _shutdown():
        # Obscure:  other threads may be waiting to join _main_thread.  That's
        # dubious, but some code does it.  We can't wait for C code to release
        # the main thread's tstate_lock - that won't happen until the interpreter
        # is nearly dead.  So we release it here.  Note that just calling _stop()
        # isn't enough:  other threads may already be waiting on _tstate_lock.
        tlock = _main_thread._tstate_lock
        # The main thread isn't finished yet, so its thread state lock can't have
        # been released.
        assert tlock is not None
        assert tlock.locked()
        tlock.release()
        _main_thread._stop()
        t = _pickSomeNonDaemonThread()
        while t:
            t.join()
            t = _pickSomeNonDaemonThread()
        _main_thread._delete()
    
    def _pickSomeNonDaemonThread():
        for t in enumerate():
            if not t.daemon and t.is_alive():
                return t
        return None
    
    def main_thread():
        """Return the main thread object.
    
        In normal conditions, the main thread is the thread from which the
        Python interpreter was started.
        """
        return _main_thread
    
    # get thread-local implementation, either from the thread
    # module, or from the python fallback
    
    try:
        from _thread import _local as local
    except ImportError:
        from _threading_local import local
    
    
    def _after_fork():
        # This function is called by Python/ceval.c:PyEval_ReInitThreads which
        # is called from PyOS_AfterFork.  Here we cleanup threading module state
        # that should not exist after a fork.
    
        # Reset _active_limbo_lock, in case we forked while the lock was held
        # by another (non-forked) thread.  http://bugs.python.org/issue874900
        global _active_limbo_lock, _main_thread
        _active_limbo_lock = _allocate_lock()
    
        # fork() only copied the current thread; clear references to others.
        new_active = {}
        current = current_thread()
        _main_thread = current
        with _active_limbo_lock:
            # Dangling thread instances must still have their locks reset,
            # because someone may join() them.
            threads = set(_enumerate())
            threads.update(_dangling)
            for thread in threads:
                # Any lock/condition variable may be currently locked or in an
                # invalid state, so we reinitialize them.
                if thread is current:
                    # There is only one active thread. We reset the ident to
                    # its new value since it can have changed.
                    thread._reset_internal_locks(True)
                    ident = get_ident()
                    thread._ident = ident
                    new_active[ident] = thread
                else:
                    # All the others are already stopped.
                    thread._reset_internal_locks(False)
                    thread._stop()
    
            _limbo.clear()
            _active.clear()
            _active.update(new_active)
            assert len(_active) == 1
    

        Thread examples:

        The Python threading module

        There are two ways to invoke a thread, as shown below:

        Direct call

    import threading,time
    
    def func(num):
        print("The lucky num is ",num)
        time.sleep(2)
    
    
    if __name__ == "__main__":
        start_time = time.time()
        t1 = threading.Thread(target=func,args=(6,))
        t2 = threading.Thread(target=func,args=(9,))
        t1.start()
        t2.start()
        end_time = time.time()
        run_time = end_time-start_time
        print("\033[34;1m程序运行时间:\033[0m",run_time)
    
    
        time1 = time.time()
        func(6)
        func(9)
        time2 = time.time()
        run_time2 = time2 - time1
        print("\033[32m直接执行需要时间:\033[0m",run_time2)
    The output is as follows:
    The lucky num is  6
    The lucky num is  9
    程序运行时间: 0.00044083595275878906
    The lucky num is  6
    The lucky num is  9
    直接执行需要时间: 4.002933979034424
    

     

        From the code above you can see that we create threads with threading.Thread, passing target=func (the function to run) and args=(the arguments,). Starting the two threads takes almost no time, but that is only the cost of launching them: the IO operation (the sleep) has not finished yet, and the main thread does not wait for it; it simply keeps going. The serial version, by contrast, executes line by line, so its running time grows with every call.

        So the first figure above is only the time spent starting the threads; it does not include the IO wait. While the IO operation is waiting, the main thread keeps running instead of waiting for the child threads. In the end, though, the interpreter still waits for all non-daemon threads to finish before the program exits.
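
        If you want to measure how long the two threads actually take to finish, you can join() them before taking the second timestamp. This is only a minimal sketch (the exact figure depends on the machine), but it should come out close to 2 seconds rather than 4:

    import threading, time

    def func(num):
        print("The lucky num is ", num)
        time.sleep(2)

    start_time = time.time()
    t1 = threading.Thread(target=func, args=(6,))
    t2 = threading.Thread(target=func, args=(9,))
    t1.start()
    t2.start()
    t1.join()          # wait until t1 has finished its 2-second sleep
    t2.join()          # wait until t2 has finished as well
    print("both threads finished in:", time.time() - start_time)   # roughly 2 seconds, not 4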

        Inheritance-style call

     

     

    import threading,time
    
    class MyThreading(threading.Thread):
        '''定义一个线程类'''
        def __init__(self,num):                       #初始化子类
            super(MyThreading,self).__init__()        #由于是继承父类threading.Thread,要重写父类,没有继承参数super(子类,self).__init__(继承父类参数)
            self.num = num
    
        def run(self):
            print("The lucky num is",self.num)
            time.sleep(2)
            print("使用类启动线程,本局执行在什么时候!")
    
    if __name__ == "__main__":
        start_time1 = time.time()
        t1 = MyThreading(6)
        t2 = MyThreading(9)
        t1.start()
        t2.start()
        end_time1 = time.time()
        run_time1 = end_time1 - start_time1
        print("线程运行时间:",run_time1)
    
        start_time2 = time.time()
        t1.run()
        t2.run()
        end_time2 = time.time()
        run_time2 = end_time2 - start_time2
        print("串行程序执行时间:",run_time2)
    The output is as follows:
    The lucky num is 6
    The lucky num is 9
    线程运行时间: 0.0004470348358154297
    The lucky num is 6
    使用类启动线程,本局执行在什么时候!
    使用类启动线程,本局执行在什么时候!
    使用类启动线程,本局执行在什么时候!
    The lucky num is 9
    使用类启动线程,本局执行在什么时候!
    串行程序执行时间: 4.004571914672852
    

     

        The program above implements the thread as a class that inherits from threading.Thread: the subclass calls the base constructor via super().__init__() and overrides run(), which start() then executes in the new thread. The full source code of threading.Thread is the class listed earlier in this article.


        A thread's name can be read with getName() and changed with setName() (or the name property). By default threads are named Thread-1, Thread-2, and so on.
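
        As a small sketch (the name "worker-1" is just an illustration), the name can also be passed straight to the constructor:

    import threading

    def work():
        # current_thread() returns the Thread object running this code
        print("running in", threading.current_thread().name)

    t = threading.Thread(target=work, name="worker-1")   # equivalent to calling t.setName("worker-1")
    t.start()
    t.join()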

        Let's look at an example:

    import threading,time
    
    def func(num):
        print("The lucky num is ",num)
        time.sleep(2)
        print("线程休眠了!")
    
    
    if __name__ == "__main__":
        start_time = time.time()
        for i in range(10):
            t1 = threading.Thread(target=func,args=("thread_%s" %i,))
            t1.start()
        end_time = time.time()
    
        print("------------------all thread is running done-----------------------")
        run_time = end_time-start_time
        print("\033[34;1m程序运行时间:\033[0m",run_time)
    

        The code above produces the following output:

    The lucky num is  thread_0
    The lucky num is  thread_1
    The lucky num is  thread_2
    The lucky num is  thread_3
    The lucky num is  thread_4
    The lucky num is  thread_5
    The lucky num is  thread_6
    The lucky num is  thread_7
    The lucky num is  thread_8
    The lucky num is  thread_9
    ------------------all thread is running done-----------------------
    程序运行时间: 0.002081155776977539
    线程休眠了!
    线程休眠了!
    线程休眠了!
    线程休眠了!
    线程休眠了!
    线程休眠了!
    线程休眠了!
    线程休眠了!
    线程休眠了!
    线程休眠了!
    

        Why is the reported running time only about 0.002 seconds instead of 2 seconds? Let's analyze it carefully:

        First of all, a program has at least one thread: the program itself is the main thread. The main thread starts the child threads, and from then on they are independent of each other and run in parallel; the main thread keeps going downward while each child thread executes on its own.

        Next, we collect the threads in a list so that we can wait for every one of them to finish:

    import threading,time
    
    def func(num):
        print("The lucky num is ",num)
        time.sleep(2)
        print("线程休眠了!")
    
    
    if __name__ == "__main__":
        start_time = time.time()
        lists = []
        for i in range(10):
            t = threading.Thread(target=func,args=("thread_%s" %i,))
            t.start()
            lists.append(t)
        for w in lists:
            w.join()                                #join()是让程序执行完毕,我们遍历,让每个线程自行执行完毕
    
        end_time = time.time()
    
        print("------------------all thread is running done-----------------------")
        run_time = end_time-start_time
        print("\033[34;1m程序运行时间:\033[0m",run_time)
    The program output is as follows:
    The lucky num is  thread_0
    The lucky num is  thread_1
    The lucky num is  thread_2
    The lucky num is  thread_3
    The lucky num is  thread_4
    The lucky num is  thread_5
    The lucky num is  thread_6
    The lucky num is  thread_7
    The lucky num is  thread_8
    The lucky num is  thread_9
    线程休眠了!
    线程休眠了!
    线程休眠了!
    线程休眠了!
    线程休眠了!
    线程休眠了!
    线程休眠了!
    线程休眠了!
    线程休眠了!
    线程休眠了!
    ------------------all thread is running done-----------------------
    程序运行时间: 2.0065605640411377
    

        In the program above we append each thread to a list right after starting it, then iterate over the list and join() every thread so that all of them have finished before the code below runs.

        能够见见,全数线程实践完结费用的总时间是:2.0065605640411377,那便是持有线程执行的年月。创立一时列表,让程序实践之后,各类线程各自实践,不影响别的线程,不然正是串行的。

        join() is documented as "Wait until the thread terminates": the calling thread blocks until the joined thread finishes (or until the optional timeout expires).
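
        Since join() always returns None, the docstring quoted in the source above suggests checking is_alive() afterwards to find out whether the timeout expired. A minimal sketch (the 0.5-second timeout is arbitrary):

    import threading, time

    def slow():
        time.sleep(2)

    t = threading.Thread(target=slow)
    t.start()
    t.join(timeout=0.5)        # give up waiting after half a second
    if t.is_alive():
        print("join() timed out, the thread is still running")
    t.join()                   # a thread can be join()ed many times; now wait for real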

        In the program above we started 10 threads. Is the first thread that runs the main thread? No: the main thread is the program itself. When the program runs, it executes from top to bottom, and that flow of control is itself a thread - the main thread. Let's verify this:

     

    import threading,time
    
    def func(num):
        print("The lucky num is ",num)
        time.sleep(2)
        print("线程休眠了!,什么线程?",threading.current_thread())
    
    
    if __name__ == "__main__":
        start_time = time.time()
        lists = []
        for i in range(10):
            t = threading.Thread(target=func,args=("thread_%s" %i,))
            t.start()
            lists.append(t)
        print("\033[31m运行的线程数:%s\033[0m" % threading.active_count())
        for w in lists:
            w.join()                                #join()是让程序执行完毕,我们遍历,让每个线程自行执行完毕
    
        end_time = time.time()
    
        print("------------------all thread is running done-----------------------",threading.current_thread())
        print("当前运行的线程数:",threading.active_count())
        run_time = end_time-start_time
        print("\033[34;1m程序运行时间:\033[0m",run_time)
    

     

        In the program above we print the current thread object both inside the worker function and in the main program, and we count the number of running threads before and after the child threads terminate. The output is as follows:

    The lucky num is  thread_0
    The lucky num is  thread_1
    The lucky num is  thread_2
    The lucky num is  thread_3
    The lucky num is  thread_4
    The lucky num is  thread_5
    The lucky num is  thread_6
    The lucky num is  thread_7
    The lucky num is  thread_8
    The lucky num is  thread_9
    运行的线程数:11
    线程休眠了!,什么线程? <Thread(Thread-2, started 140013432059648)>
    线程休眠了!,什么线程? <Thread(Thread-1, started 140013440452352)>
    线程休眠了!,什么线程? <Thread(Thread-3, started 140013423666944)>
    线程休眠了!,什么线程? <Thread(Thread-4, started 140013415274240)>
    线程休眠了!,什么线程? <Thread(Thread-10, started 140013022988032)>
    线程休眠了!,什么线程? <Thread(Thread-7, started 140013048166144)>
    线程休眠了!,什么线程? <Thread(Thread-5, started 140013406881536)>
    线程休眠了!,什么线程? <Thread(Thread-6, started 140013398488832)>
    线程休眠了!,什么线程? <Thread(Thread-8, started 140013039773440)>
    线程休眠了!,什么线程? <Thread(Thread-9, started 140013031380736)>
    ------------------all thread is running done----------------------- <_MainThread(MainThread, started 140013466183424)>
    当前运行的线程数: 1
    程序运行时间: 2.0047178268432617
    

        From the output you can see that after the 10 threads are started there are 11 running threads, and each worker runs in an ordinary Thread object, while the code after the joins runs in the main thread, <_MainThread(MainThread, ...)>. So the program itself is the main thread: starting the program starts that thread, and each child thread terminates automatically once its work is done, leaving only the main thread; on Windows the threads may appear to linger as active a little longer.

        threading.current_thread() returns the Thread object for the caller (so you can tell whether you are in the main thread), and threading.active_count() returns the number of Thread objects currently alive.
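
        A minimal sketch of the two helpers (the counts in the comments assume the three workers are still sleeping when they are taken):

    import threading, time

    def worker():
        time.sleep(0.5)

    threads = [threading.Thread(target=worker) for _ in range(3)]
    for t in threads:
        t.start()

    print(threading.current_thread())   # <_MainThread(MainThread, started ...)>
    print(threading.active_count())     # 4: the main thread plus the three workers
    for t in threads:
        t.join()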

        Daemon threads: once the main thread (and every other non-daemon thread) has finished, daemon threads are stopped as well, whether or not they are done. They are typically used for background housekeeping.

        We saw that without join() the main thread keeps going regardless of the child threads, yet the program still waits for all of them before it exits. If we turn the child threads into daemon threads, the program no longer cares whether they have finished; it exits as soon as the non-daemon threads are done.

        Below we mark the threads as daemon threads:

    import threading,time
    
    def func(num):
        print("The lucky num is ",num)
        time.sleep(2)
        print("线程休眠了!,什么线程?",threading.current_thread())
    
    
    if __name__ == "__main__":
        start_time = time.time()
        lists = []
        for i in range(10):
            t = threading.Thread(target=func,args=("thread_%s" %i,))
            t.setDaemon(True)    #Daemon:守护进程,把线程设置为守护线程
            t.start()
            lists.append(t)
        print("\033[31m运行的线程数:%s\033[0m" % threading.active_count())
        print("当前执行线程:%s" %threading.current_thread())
        # for w in lists:
        #     w.join()                                #join()是让程序执行完毕,我们遍历,让每个线程自行执行完毕
    
        end_time = time.time()
    
        print("------------------all thread is running done-----------------------",threading.current_thread())
        print("当前运行的线程数:",threading.active_count())
        run_time = end_time-start_time
        print("\033[34;1m程序运行时间:\033[0m",run_time)
    

        In the program above we start 10 threads and mark each of them as a daemon with setDaemon(True) before start(). The execution result is:

    The lucky num is  thread_0
    The lucky num is  thread_1
    The lucky num is  thread_2
    The lucky num is  thread_3
    The lucky num is  thread_4
    The lucky num is  thread_5
    The lucky num is  thread_6
    The lucky num is  thread_7
    The lucky num is  thread_8
    The lucky num is  thread_9
    运行的线程数:11
    当前执行线程:<_MainThread(MainThread, started 140558033020672)>
    ------------------all thread is running done----------------------- <_MainThread(MainThread, started 140558033020672)>
    当前运行的线程数: 11
    程序运行时间: 0.0032095909118652344
    

        From the output you can see that once the threads are daemonized, the main program finishes while the daemon threads are still blocked in their sleep(). Because they are daemons, the program exits without waiting for them: a daemon thread that happens to finish before the main thread still prints its result, otherwise it is killed together with the main thread.

        setDaemon() marks the thread as a daemon; it must be called before t.start().
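
        The same effect can be had through the daemon keyword argument of the constructor (visible in the Thread.__init__ signature above) or the daemon property. A minimal sketch; the endless background loop here is just an illustration:

    import threading, time

    def housekeeping():
        while True:
            time.sleep(1)      # pretend to do periodic background work

    t = threading.Thread(target=housekeeping, daemon=True)   # same as setDaemon(True) before start()
    t.start()
    print("main thread is done; the daemon thread dies with it")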

        GIL (Global Interpreter Lock): a quad-core machine can genuinely do four things at once, while a single core is always serial. In CPython, however, no matter how many cores you have, only one thread executes Python bytecode at any given moment - a design decision from the early days of the interpreter. Python threads are real OS-level (C) threads, but the interpreter only lets the thread holding the GIL run Python code and modify Python objects, so CPU-bound Python threads cannot use multiple cores.
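
        A rough sketch of what this means in practice for CPU-bound code: counting down the same total serially and with two threads takes about the same wall-clock time under the GIL. The loop size is arbitrary and the exact numbers depend on the machine:

    import threading, time

    def countdown(n):
        while n > 0:
            n -= 1

    N = 10000000

    start = time.time()
    countdown(N)
    countdown(N)
    print("serial:", time.time() - start)

    start = time.time()
    t1 = threading.Thread(target=countdown, args=(N,))
    t2 = threading.Thread(target=countdown, args=(N,))
    t1.start(); t2.start()
    t1.join(); t2.join()
    print("two threads:", time.time() - start)   # roughly the same (often slower) because of the GIL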

        Thread lock (mutex)

        A process can run multiple threads, and all of them share the parent process's memory space, which means every thread can access the same data. So what happens if two threads modify the same piece of data at the same time?

        Normally the final num should be 0, but if you run the example a few times on Python 2.7 you will find that the printed num is not always 0. Why do the results differ between runs? Simple: suppose threads A and B both want to subtract 1 from num. Because the two threads run concurrently, both may read num = 100 at the same moment; A computes 99, but B also computes 99, and after both write their result back, num is 99 instead of 98. The fix is that each thread takes a lock on the shared data before modifying it, so nobody else can change it halfway through; any other thread that wants to modify the data must wait until the lock is released.

        Note: the race is hard to trigger on Python 3.x; for whatever reason the result there tends to come out correct, as if a lock were applied automatically.
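
        The Python 2.7 example the text refers to is not reproduced here; a minimal sketch of that kind of unsynchronized counter might look like the following. The short sleep inside the read-modify-write window is only there to make the race easy to observe, even on Python 3:

    import threading, time

    num = 100

    def sub_one():
        global num
        temp = num             # read
        time.sleep(0.001)      # widen the window between read and write
        num = temp - 1         # write back a stale value if another thread ran in between

    threads = [threading.Thread(target=sub_one) for _ in range(100)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    print(num)   # expected 0, but without a lock it usually ends up much larger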

     

        Threads can affect each other through shared state. Here is an example in which all the threads modify the same piece of data:

     

    import threading,time
    
    def func(n):
        global num
        time.sleep(0.8)                            #sleep()是不占用CPU的CPU会执行其他的
        num += 1                                   #所有的线程共同修改num数据
    
    if __name__ == "__main__":
        num = 0
        lists = []
        for i in range(1000):
            t = threading.Thread(target=func,args=("thread_%s" %i,))
            # t.setDaemon(True)    #Daemon:守护进程,把线程设置为守护线程
            t.start()
            lists.append(t)
        print("\033[31m运行的线程数:%s\033[0m" % threading.active_count())
        for w in lists:
            w.join()                                #join()是让程序执行完毕,我们遍历,让每个线程自行执行完毕
    
        print("------------------all thread is running done-----------------------")
        print("当前运行的线程数:",threading.active_count())
    
        print("num:",num)                           #所有的线程共同修改一个数据
    

     

        In the program above every thread increments num by 1, so the expected result is 1000. The output is:

    运行的线程数:1001
    ------------------all thread is running done-----------------------
    当前运行的线程数: 1
    num: 1000
    

        The result here is indeed 1000, but on earlier versions the result would often come out as 999 or some other nearby number; on Python 3 the problem rarely shows. Why can that happen?

        The interpreter only lets one thread run at a time: a thread acquires the global interpreter lock, and because execution is scheduled in time slices, when its slice expires it must release the GIL even if its computation has not finished. The error appears precisely when a thread is switched out in the middle of the read-modify-write, so another thread picks up the old, unmodified value.


     

        How do we fix this? By adding a lock of our own. The global interpreter lock is acquired and released on the interpreter's own schedule; on top of it we add an application-level lock around the update, so that while one thread is doing the calculation no other thread can slip in just because the GIL happened to be released halfway through. Our own lock is released only once the thread has completely finished its modification, and only then may another thread take its turn:

     

    import threading,time
    
    def func(n):
        lock.acquire()                             # take the lock so this thread finishes its update alone
        global num
        # time.sleep(0)                            # do NOT sleep or do I/O while holding the lock
        num += 1                                   # every thread modifies the shared variable num
        lock.release()
    
    if __name__ == "__main__":
        lock = threading.Lock()                    # create the lock
        num = 0
        lists = []
        for i in range(10):
            t = threading.Thread(target=func, args=("thread_%s" % i,))
            # t.setDaemon(True)                    # would turn the thread into a daemon thread
            t.start()
            lists.append(t)
        print("\033[31m运行的线程数:%s\033[0m" % threading.active_count())
        for w in lists:
            w.join()                                # join() waits for each thread to finish
    
        print("------------------all thread is running done-----------------------")
        print("当前运行的线程数:", threading.active_count())
    
        print("num:", num)                          # final value of the shared variable
    

     

         上边程序中,我们率先评释了一把锁,lock=threading.Lock(),然后在施行线程中加锁,lock.acquire(),最终获释lock.release(),假若加锁的话,一定要切记,程序推行时间比较端,由于自由锁外人手艺利用,等于让程序编程串行的了,由此,里面不可能有IO操作,不可能会实行不快,加锁让程序效能必然会变慢,可是保障了数量的准头。加锁是让此番线程实行实现才获释,因而之后此次释放才会实践下一遍线程。

        上边程序中,程序本人实行的时候,GIL LOCK会在系统申请锁,大家相濡以沫给程序也加了锁。

        Recursive lock: when locks are nested (a function that already holds a lock calls other functions that acquire the same lock again), an ordinary Lock leaves the program unable to sort out which release matches which acquire, and it simply deadlocks. That is what recursive locks are for. First, the broken version:

    import threading
    '''nested locking with an ordinary Lock -- this version deadlocks'''
    
    def run1(num):
        lock.acquire()
        num += 1
        lock.release()
        return num
    
    def run2(num):
        lock.acquire()
        num += 2
        lock.release()
        return num
    
    def run3(x,y):
        lock.acquire()
        """call run1"""
        res1 = run1(x)                                         # run1 acquires the same lock again, nested inside run3's lock
        '''call run2'''
        res2 = run2(y)                                         # run2 does too; run1 and run2 are siblings under run3's lock
        lock.release()
        print("res1:",res1,"res2:",res2)
    
    if __name__ == "__main__":
        lock = threading.Lock()
        for i in range(10):
            t = threading.Thread(target=run3,args=(1,1,))       # each thread runs the doubly-locked run3
            t.start()
        while threading.active_count() != 1:                    # when only the main thread is left, active_count() == 1
            print("\033[31m活跃的线程个数:%s\033[0m" %threading.active_count())
        else:
            print("All the threading task done!!!")
    

        Above we wrote three functions: run3 acquires the lock and then calls run1 and run2, each of which acquires the same lock again. run1 and run2 sit under run3's lock and are siblings of each other, with no further nesting between them. Now run it and see what happens:

    活跃的线程个数:11
    活跃的线程个数:11
    活跃的线程个数:11
    活跃的线程个数:11
    活跃的线程个数:11
    活跃的线程个数:11
    活跃的线程个数:11
    活跃的线程个数:11
    ......
    

        As the output shows, the ten threads never make progress: every level tries to take the same lock, the inner acquire blocks forever on the lock the outer level already holds, and the program hangs. The solution is a recursive lock -- a lock that records which thread holds it and how many times, so the same thread can acquire it repeatedly.

    import threading
    '''the same nesting, but with a recursive lock (RLock)'''
    
    def run1():
        lock.acquire()     # acquire the lock
        global num1
        num1 += 1
        lock.release()
        return num1
    
    def run2():
        '''acquire the lock'''
        lock.acquire()
        global num2
        num2 += 2
        lock.release()
        return num2
    
    def run3():
        lock.acquire()
        res1 = run1()
        '''second nested call'''
        res2 = run2()
        lock.release()
        print(res1,res2)
    
    if __name__ == "__main__":
        num1,num2 =1,2
        lock = threading.RLock()
        for i in range(10):
            t = threading.Thread(target=run3)
            t.start()
    
    while threading.active_count() != 1:
        print("\033[31m当前活跃的线程个数:%s\033[0m" %threading.active_count())
    else:
        print("All the thread has task done!!!!")
        print(num1,num2)
    

         The only change in this version is the lock type: a recursive lock, threading.RLock() (recursion lock), instead of threading.Lock(), and the problem disappears at once:

    2 4
    3 6
    4 8
    5 10
    6 12
    7 14
    8 16
    9 18
    10 20
    11 22
    当前活跃的线程个数:2
    All the thread has task done!!!!
    11 22
    

        With RLock() the program runs correctly and the nested locking no longer breaks anything. The example also shows how to work with a global variable inside a function: declare the name with global first, then modify it.
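
        The difference between Lock and RLock is easy to show in isolation -- a tiny sketch (try it once with each class):

    import threading
    
    lock = threading.RLock()  # with threading.Lock() the second acquire() below would block forever
    
    lock.acquire()
    lock.acquire()            # the same thread acquires again: fine for RLock, deadlock for Lock
    print("re-entered the lock")
    lock.release()
    lock.release()            # every acquire() must be matched by a release()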

        Semaphore

        A mutex allows only one thread at a time to change the data, whereas a semaphore allows a fixed number of threads at once. Think of a toilet with 3 stalls: at most 3 people can use it at the same time, and everyone else has to wait until someone comes out.

        In other words, a semaphore limits how many threads may execute a section simultaneously: we can start many threads but let only a fixed number of them in at once, and as each one finishes, a new one is admitted, until all of them are done.

    import threading,time
    '''limit concurrency with a semaphore'''
    
    def run1():
        global num1
        num1 += 1
        return num1
    
    def run2():
        global num2
        num2 += 2
        return num2
    
    def run3():
        semaphore.acquire()
        res1 = run1()
        '''second call'''
        res2 = run2()
        semaphore.release()
        time.sleep(2)
        print(res1,res2)
    
    if __name__ == "__main__":
        num1,num2 =1,2
        semaphore = threading.BoundedSemaphore(5)               # at most 5 threads inside at the same time
        for i in range(10):
            t = threading.Thread(target=run3)
            t.start()
    
    while threading.active_count() != 1:
        print("\033[31m当前活跃的线程个数:%s\033[0m" %threading.active_count())
    else:
        print("All the thread has task done!!!!")
        print(num1,num2)
    

        This program uses a semaphore: although 10 threads are started, only 5 of them may be inside the acquire/release section at the same time. threading.BoundedSemaphore(5) creates such a semaphore; the "bounded" variant additionally raises an error if release() is called more often than acquire(). The output:

    当前活跃的线程个数:11
    当前活跃的线程个数:11
    当前活跃的线程个数:11
    当前活跃的线程个数:11
    当前活跃的线程个数:11
    当前活跃的线程个数:11
    当前活跃的线程个数:11
    3 6
    当前活跃的线程个数:11
    当前活跃的线程个数:11
    当前活跃的线程个数:11
    当前活跃的线程个数:11
    当前活跃的线程个数:11
    当前活跃的线程个数:11
    当前活跃的线程个数:11
    当前活跃的线程个数:11
    当前活跃的线程个数:11
    当前活跃的线程个数:11
    当前活跃的线程个数:11
    4 8
    当前活跃的线程个数:9
    当前活跃的线程个数:9
    当前活跃的线程个数:9
    当前活跃的线程个数:9
    当前活跃的线程个数:9
    当前活跃的线程个数:9
    当前活跃的线程个数:9
    当前活跃的线程个数:9
    当前活跃的线程个数:9
    当前活跃的线程个数:9
    当前活跃的线程个数:9
    当前活跃的线程个数:9
    当前活跃的线程个数:9
    当前活跃的线程个数:9
    当前活跃的线程个数:9
    当前活跃的线程个数:9
    当前活跃的线程个数:9
    当前活跃的线程个数:9
    当前活跃的线程个数:9
    当前活跃的线程个数:9
    6 12
    5 10
    7 14
    2 4
    当前活跃的线程个数:5
    8 16
    当前活跃的线程个数:4
    当前活跃的线程个数:4
    当前活跃的线程个数:4
    当前活跃的线程个数:4
    当前活跃的线程个数:4
    当前活跃的线程个数:4
    当前活跃的线程个数:4
    当前活跃的线程个数:4
    当前活跃的线程个数:4
    当前活跃的线程个数:4
    当前活跃的线程个数:4
    当前活跃的线程个数:4
    当前活跃的线程个数:4
    当前活跃的线程个数:4
    当前活跃的线程个数:4
    当前活跃的线程个数:4
    当前活跃的线程个数:4
    当前活跃的线程个数:4
    当前活跃的线程个数:4
    当前活跃的线程个数:4
    当前活跃的线程个数:4
    当前活跃的线程个数:4
    当前活跃的线程个数:4
    当前活跃的线程个数:4
    当前活跃的线程个数:4
    当前活跃的线程个数:4
    当前活跃的线程个数:4
    当前活跃的线程个数:4
    当前活跃的线程个数:4
    当前活跃的线程个数:4
    当前活跃的线程个数:4
    当前活跃的线程个数:4
    当前活跃的线程个数:4
    当前活跃的线程个数:4
    当前活跃的线程个数:4
    当前活跃的线程个数:4
    当前活跃的线程个数:4
    当前活跃的线程个数:4
    当前活跃的线程个数:4
    当前活跃的线程个数:4
    当前活跃的线程个数:4
    当前活跃的线程个数:4
    当前活跃的线程个数:4
    当前活跃的线程个数:4
    当前活跃的线程个数:4
    11 22
    当前活跃的线程个数:3
    当前活跃的线程个数:3
    当前活跃的线程个数:3
    10 20
    当前活跃的线程个数:2
    当前活跃的线程个数:2
    当前活跃的线程个数:2
    当前活跃的线程个数:2
    当前活跃的线程个数:2
    当前活跃的线程个数:2
    当前活跃的线程个数:2
    当前活跃的线程个数:2
    当前活跃的线程个数:2
    当前活跃的线程个数:2
    当前活跃的线程个数:2
    当前活跃的线程个数:2
    当前活跃的线程个数:2
    当前活跃的线程个数:2
    当前活跃的线程个数:2
    当前活跃的线程个数:2
    当前活跃的线程个数:2
    当前活跃的线程个数:2
    当前活跃的线程个数:2
    当前活跃的线程个数:2
    当前活跃的线程个数:2
    当前活跃的线程个数:2
    当前活跃的线程个数:2
    当前活跃的线程个数:2
    当前活跃的线程个数:2
    当前活跃的线程个数:2
    当前活跃的线程个数:2
    当前活跃的线程个数:2
    当前活跃的线程个数:2
    9 18
    All the thread has task done!!!!
    11 22
    

        As the output shows, the threads run in batches: only 5 are allowed in at the same time, and whenever one finishes, a new one is admitted.
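
        The same pattern can be written a bit more compactly by using the semaphore as a context manager and waiting with join() instead of polling active_count(). A sketch (the worker body and the limit of 5 are illustrative):

    import threading, time
    
    semaphore = threading.BoundedSemaphore(5)       # at most 5 workers inside the protected section
    
    def worker(n):
        with semaphore:                             # acquire on entry, release on exit
            print("worker %s inside" % n)
            time.sleep(1)
    
    threads = [threading.Thread(target=worker, args=(i,)) for i in range(10)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()                                    # wait for every worker instead of busy-looping
    print("all workers done")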

        Events

        An event is a simple synchronization object: it represents an internal flag, and threads can wait for the flag to be set, or set and clear the flag themselves.

        event = threading.Event()   # create an event object

        event.wait()                # a client thread can wait for the flag to be set (blocks until it is)

        event.set()                 # a server thread can set the flag

        event.clear()               # ... or clear it again

        If the flag is set, the wait method doesn't do anything.

        If the flag is cleared, wait will block until it becomes set again.

        Any number of threads may wait for the same event.
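
        The traffic-light examples below check event.is_set() in a polling loop; for completeness, here is a minimal sketch of the blocking style with event.wait() (the thread names and the 1-second delay are illustrative):

    import threading, time
    
    event = threading.Event()
    
    def waiter(n):
        print("waiter %s waiting for the flag" % n)
        event.wait()                     # blocks until some thread calls event.set()
        print("waiter %s released" % n)
    
    for i in range(3):
        threading.Thread(target=waiter, args=(i,)).start()
    
    time.sleep(1)
    event.set()                          # wakes up every thread blocked in event.wait()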

        Now for a traffic-light program: the light switches between red and green, cars wait at a red light and drive at a green one -- two threads interacting through an Event:

     

    import threading,time
    
    def traffic_lights():
        counter = 0
        while True:
            if counter < 30:
                print("\033[42m即将转为绿灯,准备通行!!!\033[0m")
                event.set()                          # one cycle is 60 seconds; the first 30 seconds are green
                print("\033[32m绿灯,通行......\033[0m")
            elif counter >= 30 and counter <= 60:
                print("\033[41m即将转为红灯,请等待!!!\033[0m")
                event.clear()                        # clear the flag: switch to red
                print("\033[31m红灯中,禁止通行......\033[0m")
            elif counter > 60:
                counter = 0                          # past 60 seconds, start the next cycle
            counter += 1
            time.sleep(1)                            # tick once per second
    
    def car(name):
        '''car thread: check the light and either drive or wait'''
        while True:
            if event.is_set():                       # flag set means the light is green
                '''flag is set: green light, the car may drive'''
                print("[%s] is running!!!" %name)
            else:
                '''flag not set: red light, the car waits'''
                print("[%s] is waitting!!!" %name)
            time.sleep(1)
    
    if __name__ == "__main__":
        try:
            event = threading.Event()
            lighter = threading.Thread(target=traffic_lights)
            lighter.start()
            '''start the car threads'''
            for i in range(1):
                my_car = threading.Thread(target=car,args=("tesla",))
                my_car.start()
        except KeyboardInterrupt as e:
            print("线程断开了!!!")
    
        except Exception as e:
            print("线程断开了!!!")
    

     

        Running the program gives:

    即将转为绿灯,准备通行!!!
    绿灯,通行......
    [tesla] is running!!!
    即将转为绿灯,准备通行!!!
    绿灯,通行......
    [tesla] is running!!!
    即将转为绿灯,准备通行!!!
    [tesla] is running!!!
    绿灯,通行......
    [tesla] is running!!!
    即将转为绿灯,准备通行!!!
    绿灯,通行......
    即将转为绿灯,准备通行!!!
    [tesla] is running!!!
    绿灯,通行......
    [tesla] is running!!!
    即将转为绿灯,准备通行!!!
    绿灯,通行......
    即将转为绿灯,准备通行!!!
    [tesla] is running!!!
    绿灯,通行......
    [tesla] is running!!!
    即将转为绿灯,准备通行!!!
    绿灯,通行......
    即将转为绿灯,准备通行!!!
    绿灯,通行......
    [tesla] is running!!!
    即将转为绿灯,准备通行!!!
    绿灯,通行......
    [tesla] is running!!!
    即将转为绿灯,准备通行!!!
    绿灯,通行......
    [tesla] is running!!!
    即将转为绿灯,准备通行!!!
    绿灯,通行......
    [tesla] is running!!!
    即将转为绿灯,准备通行!!!
    绿灯,通行......
    [tesla] is running!!!
    即将转为绿灯,准备通行!!!
    绿灯,通行......
    [tesla] is running!!!
    即将转为绿灯,准备通行!!!
    绿灯,通行......
    [tesla] is running!!!
    即将转为绿灯,准备通行!!!
    绿灯,通行......
    [tesla] is running!!!
    即将转为绿灯,准备通行!!!
    绿灯,通行......
    [tesla] is running!!!
    即将转为绿灯,准备通行!!!
    [tesla] is running!!!
    绿灯,通行......
    [tesla] is running!!!
    即将转为绿灯,准备通行!!!
    绿灯,通行......
    即将转为绿灯,准备通行!!!
    绿灯,通行......
    [tesla] is running!!!
    即将转为绿灯,准备通行!!!
    [tesla] is running!!!
    绿灯,通行......
    [tesla] is running!!!
    即将转为绿灯,准备通行!!!
    绿灯,通行......
    [tesla] is running!!!
    即将转为绿灯,准备通行!!!
    绿灯,通行......
    [tesla] is running!!!
    即将转为绿灯,准备通行!!!
    绿灯,通行......
    [tesla] is running!!!
    即将转为绿灯,准备通行!!!
    绿灯,通行......
    [tesla] is running!!!
    即将转为绿灯,准备通行!!!
    绿灯,通行......
    即将转为绿灯,准备通行!!!
    绿灯,通行......
    [tesla] is running!!!
    [tesla] is running!!!
    即将转为绿灯,准备通行!!!
    绿灯,通行......
    即将转为绿灯,准备通行!!!
    绿灯,通行......
    [tesla] is running!!!
    即将转为绿灯,准备通行!!!
    绿灯,通行......
    [tesla] is running!!!
    即将转为红灯,请等待!!!
    [tesla] is running!!!
    红灯中,禁止通行......
    [tesla] is waitting!!!
    即将转为红灯,请等待!!!
    红灯中,禁止通行......
    即将转为红灯,请等待!!!
    [tesla] is waitting!!!
    红灯中,禁止通行......
    [tesla] is waitting!!!
    即将转为红灯,请等待!!!
    红灯中,禁止通行......
    [tesla] is waitting!!!
    即将转为红灯,请等待!!!
    红灯中,禁止通行......
    [tesla] is waitting!!!
    即将转为红灯,请等待!!!
    红灯中,禁止通行......
    即将转为红灯,请等待!!!
    红灯中,禁止通行......
    [tesla] is waitting!!!
    

        Here we defined two threads and made them interact through an Event: event.set() raises the flag, meaning "go", and event.clear() removes it, meaning "wait"; traffic only moves again once the flag has been set anew. The next version adds a countdown and a yellow phase:

    import threading,time
    
    def traffic_lights():
        '''traffic-light thread: cycles green -> yellow -> red -> yellow -> green, with a countdown'''
        global counter                                                           # tick counter
        counter = 0
        while True:
            if counter < 40:                                                     # green phase
                event.set()
                '''green light, traffic may pass'''
                print("\033[42mThe light is on green light,runing!!!\033[0m")
                print("剩余通行时间:%s" %(40-counter))
            elif counter >40 and counter <= 43:
                event.clear()
                '''yellow light: switching from green to red'''
                print("Yellow light is on,waitting!!!即将转为红灯!")
            elif counter > 43 and counter <= 63:
                '''red phase, switched over from yellow'''
                print("\033[41mThe red light is on!!! Waitting\033[0m")
                print("剩余红灯时间:%s" %(63-counter))
            elif counter > 63 and counter <= 66:
                '''yellow again: switching from red back to green'''
                print("The yewwlow is on,Waitting!!!即将转为红灯!!")
            elif counter > 66:
                counter = 0
            counter += 1
            time.sleep(1)
    
    def go_through(name):
        '''car thread: drive or wait depending on the light above'''
        while True:
            if event.is_set():
                """green light, go"""
                print("[%s] is running!!!" %name)
            else:
                print("%s is waitting!!!" %name)
            time.sleep(1)
    
    if __name__ == "__main__":
        event = threading.Event()
        lights = threading.Thread(target=traffic_lights)
        lights.start()
    
        car = threading.Thread(target=go_through,args=("tesla",))
        car.start()
    

        上边程序中,大家落到实处了时间提醒,跟实际世界的红绿灯很相似,并且由绿--黄--红至红--黄--绿,达成往返的转变,如下所示:

    The light is on green light,runing!!!
    剩余通行时间:40
    [tesla] is running!!!
    The light is on green light,runing!!!
    剩余通行时间:39
    [tesla] is running!!!
    The light is on green light,runing!!!
    剩余通行时间:38
    [tesla] is running!!!
    [tesla] is running!!!
    The light is on green light,runing!!!
    剩余通行时间:37
    [tesla] is running!!!
    The light is on green light,runing!!!
    剩余通行时间:36
    [tesla] is running!!!
    The light is on green light,runing!!!
    剩余通行时间:35
    [tesla] is running!!!
    The light is on green light,runing!!!
    剩余通行时间:34
    [tesla] is running!!!
    The light is on green light,runing!!!
    剩余通行时间:33
    [tesla] is running!!!
    The light is on green light,runing!!!
    剩余通行时间:32
    [tesla] is running!!!
    The light is on green light,runing!!!
    剩余通行时间:31
    [tesla] is running!!!
    The light is on green light,runing!!!
    剩余通行时间:30
    [tesla] is running!!!
    The light is on green light,runing!!!
    剩余通行时间:29
    [tesla] is running!!!
    The light is on green light,runing!!!
    剩余通行时间:28
    [tesla] is running!!!
    The light is on green light,runing!!!
    剩余通行时间:27
    [tesla] is running!!!
    The light is on green light,runing!!!
    剩余通行时间:26
    The light is on green light,runing!!!
    剩余通行时间:25
    [tesla] is running!!!
    [tesla] is running!!!
    The light is on green light,runing!!!
    剩余通行时间:24
    The light is on green light,runing!!!
    [tesla] is running!!!
    剩余通行时间:23
    [tesla] is running!!!
    The light is on green light,runing!!!
    剩余通行时间:22
    The light is on green light,runing!!!
    剩余通行时间:21
    [tesla] is running!!!
    The light is on green light,runing!!!
    剩余通行时间:20
    [tesla] is running!!!
    [tesla] is running!!!
    The light is on green light,runing!!!
    剩余通行时间:19
    [tesla] is running!!!
    The light is on green light,runing!!!
    剩余通行时间:18
    [tesla] is running!!!
    The light is on green light,runing!!!
    剩余通行时间:17
    The light is on green light,runing!!!
    剩余通行时间:16
    [tesla] is running!!!
    The light is on green light,runing!!!
    剩余通行时间:15
    [tesla] is running!!!
    The light is on green light,runing!!!
    剩余通行时间:14
    [tesla] is running!!!
    The light is on green light,runing!!!
    剩余通行时间:13
    [tesla] is running!!!
    The light is on green light,runing!!!
    剩余通行时间:12
    [tesla] is running!!!
    The light is on green light,runing!!!
    剩余通行时间:11
    [tesla] is running!!!
    The light is on green light,runing!!!
    剩余通行时间:10
    [tesla] is running!!!
    [tesla] is running!!!
    The light is on green light,runing!!!
    剩余通行时间:9
    The light is on green light,runing!!!
    剩余通行时间:8
    [tesla] is running!!!
    The light is on green light,runing!!!
    剩余通行时间:7
    [tesla] is running!!!
    The light is on green light,runing!!!
    剩余通行时间:6
    [tesla] is running!!!
    [tesla] is running!!!
    The light is on green light,runing!!!
    剩余通行时间:5
    [tesla] is running!!!
    The light is on green light,runing!!!
    剩余通行时间:4
    [tesla] is running!!!
    The light is on green light,runing!!!
    剩余通行时间:3
    [tesla] is running!!!
    The light is on green light,runing!!!
    剩余通行时间:2
    The light is on green light,runing!!!
    剩余通行时间:1
    [tesla] is running!!!
    [tesla] is running!!!
    [tesla] is running!!!
    Yellow light is on,waitting!!!即将转为红灯!
    tesla is waitting!!!
    Yellow light is on,waitting!!!即将转为红灯!
    Yellow light is on,waitting!!!即将转为红灯!
    tesla is waitting!!!
    The red light is on!!! Waitting
    剩余红灯时间:19
    tesla is waitting!!!
    The red light is on!!! Waitting
    tesla is waitting!!!
    剩余红灯时间:18
    The red light is on!!! Waitting
    tesla is waitting!!!
    剩余红灯时间:17
    The red light is on!!! Waitting
    剩余红灯时间:16
    tesla is waitting!!!
    The red light is on!!! Waitting
    tesla is waitting!!!
    剩余红灯时间:15
    tesla is waitting!!!
    The red light is on!!! Waitting
    剩余红灯时间:14
    The red light is on!!! Waitting
    剩余红灯时间:13
    tesla is waitting!!!
    tesla is waitting!!!
    The red light is on!!! Waitting
    剩余红灯时间:12
    tesla is waitting!!!
    The red light is on!!! Waitting
    剩余红灯时间:11
    tesla is waitting!!!
    The red light is on!!! Waitting
    剩余红灯时间:10
    The red light is on!!! Waitting
    剩余红灯时间:9
    tesla is waitting!!!
    The red light is on!!! Waitting
    tesla is waitting!!!
    剩余红灯时间:8
    tesla is waitting!!!
    The red light is on!!! Waitting
    剩余红灯时间:7
    The red light is on!!! Waitting
    tesla is waitting!!!
    剩余红灯时间:6
    The red light is on!!! Waitting
    tesla is waitting!!!
    剩余红灯时间:5
    tesla is waitting!!!
    The red light is on!!! Waitting
    剩余红灯时间:4
    The red light is on!!! Waitting
    tesla is waitting!!!
    剩余红灯时间:3
    The red light is on!!! Waitting
    tesla is waitting!!!
    剩余红灯时间:2
    tesla is waitting!!!
    The red light is on!!! Waitting
    剩余红灯时间:1
    The red light is on!!! Waitting
    剩余红灯时间:0
    tesla is waitting!!!
    tesla is waitting!!!
    The yewwlow is on,Waitting!!!即将转为红灯!!
    tesla is waitting!!!
    The yewwlow is on,Waitting!!!即将转为红灯!!
    tesla is waitting!!!
    The yewwlow is on,Waitting!!!即将转为红灯!!
    tesla is waitting!!!
    tesla is waitting!!!
    The light is on green light,runing!!!
    剩余通行时间:39
    [tesla] is running!!!
    The light is on green light,runing!!!
    剩余通行时间:38
    [tesla] is running!!!
    

        In this final program the light alternates by setting and clearing the event flag, and the car thread decides what to do by checking that flag: it drives only while the flag is set (green) and waits whenever the flag has been cleared.
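
        One possible refinement, not part of the original program: instead of checking event.is_set() once per second, the car thread can block on event.wait(), so it reacts the moment the light turns green and burns no CPU while waiting. A self-contained sketch (the 3-second light cycle and the 10-second demo length are arbitrary):

    import threading, time
    
    event = threading.Event()
    
    def light():
        while True:
            event.set()                  # green for 3 seconds
            time.sleep(3)
            event.clear()                # red for 3 seconds
            time.sleep(3)
    
    def car(name):
        while True:
            event.wait()                 # blocks while the light is red, wakes up on event.set()
            print("[%s] is running!!!" % name)
            time.sleep(1)
    
    threading.Thread(target=light, daemon=True).start()
    threading.Thread(target=car, args=("tesla",), daemon=True).start()
    time.sleep(10)                       # let the demo run for a while, then exit together with the daemons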
