Answers:
You can use glob:
import glob, os
os.chdir("/mydir")
for file in glob.glob("*.txt"):
    print(file)
Or simply os.listdir:
import os
for file in os.listdir("/mydir"):
    if file.endswith(".txt"):
        print(os.path.join("/mydir", file))
Or, if you want to traverse the directory tree, use os.walk:
import os
for root, dirs, files in os.walk("/mydir"):
    for file in files:
        if file.endswith(".txt"):
             print(os.path.join(root, file))
              for file in f is more appropriate to write than for files in f, because the variable holds only a single filename. Better still, rename f to files, and then the for loop can become for file in files.
                    file is not a reserved word, just the name of a predefined function, so it is quite possible to use it as a variable name in your own code. Although such collisions should generally be avoided, file is a special case because there is hardly ever a need to use it, so it is commonly treated as an exception to the guideline. If you don't want to do that, PEP 8 recommends appending a single underscore to such names, i.e. file_, which you must agree is still quite readable.
                    Use glob.
>>> import glob
>>> glob.glob('./*.txt')
['./outline.txt', './pip-log.txt', './test.txt', './testingvim.txt']
              Something like this should do the job:
for root, dirs, files in os.walk(directory):
    for file in files:
        if file.endswith('.txt'):
            print(file)
              root, dirs, files instead of r, d, f. Much more readable.
                    Something like this will work:
>>> import os
>>> path = '/usr/share/cups/charmaps'
>>> text_files = [f for f in os.listdir(path) if f.endswith('.txt')]
>>> text_files
['euc-cn.txt', 'euc-jp.txt', 'euc-kr.txt', 'euc-tw.txt', ... 'windows-950.txt']
              Use os.path.join on each element of text_files. It could be something like text_files = [os.path.join(path, f) for f in os.listdir(path) if f.endswith('.txt')].
                    import pathlib
list(pathlib.Path('your_directory').glob('*.txt'))
Or as a loop:
for txt_file in pathlib.Path('your_directory').glob('*.txt'):
    # do something with "txt_file"
If you want it recursive you can use .glob('**/*.txt').
1 The pathlib module is included in the standard library since Python 3.4. But you can install back-ports of that module even on older Python versions (i.e. using conda or pip): pathlib and pathlib2.
**/*.txt is not supported by older Python versions. So I solved this by using:
foundfiles = subprocess.check_output("ls **/*.txt", shell=True)
for foundfile in foundfiles.splitlines():
    print(foundfile)
                    pathlib can do that, and I've included the Python version requirements. :) But if your approach hasn't been posted yet, why not add it as another answer?
                    You can also use rglob if you want to look up items recursively, e.g. .rglob('*.txt').
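As a minimal sketch of that comment (building a throwaway tree so the result is predictable), rglob('*.txt') finds the same files as glob('**/*.txt'):

```python
import pathlib
import tempfile

# Build a tiny throwaway tree: one .txt at the top level, one in a subfolder.
tmp = pathlib.Path(tempfile.mkdtemp())
(tmp / "sub").mkdir()
(tmp / "a.txt").write_text("x")
(tmp / "sub" / "b.txt").write_text("x")

# rglob("*.txt") is shorthand for glob("**/*.txt"): both recurse.
found = sorted(p.name for p in tmp.rglob("*.txt"))
print(found)  # ['a.txt', 'b.txt']
```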
                    I like os.walk():
import os
for root, dirs, files in os.walk(dir):
    for f in files:
        if os.path.splitext(f)[1] == '.txt':
            fullpath = os.path.join(root, f)
            print(fullpath)
Or with generators:
import os
fileiter = (os.path.join(root, f)
    for root, _, files in os.walk(dir)
    for f in files)
txtfileiter = (f for f in fileiter if os.path.splitext(f)[1] == '.txt')
for txt in txtfileiter:
    print(txt)
              Here are some more versions of the same that produce slightly different results:
import glob
for f in glob.iglob("/mydir/*/*.txt"): # generator, search immediate subdirectories
    print(f)
print(glob.glob1("/mydir", "*.tx?"))  # literal_directory, basename_pattern
import fnmatch, os
print(fnmatch.filter(os.listdir("/mydir"), "*.tx?")) # include dot-files
              glob1() is a helper function in the glob module that isn't listed in the Python documentation. There are some inline comments describing what it does in the source file; see .../Lib/glob.py.
                    glob.glob1() is not public, but it is available in Python 2.4-2.7; 3.0-3.2; pypy; jython github.com/zed/test_glob1
                    … extracted from the glob module.
                    path.py is another alternative: https://github.com/jaraco/path.py
from path import path
p = path('/path/to/the/directory')
for f in p.files(pattern='*.txt'):
    print(f)
              for f in p.walk(pattern='*.txt') traverses all subfolders.
                    list(p.glob('**/*.py'))
                    A fast method using os.scandir in a recursive function. Searches for all files with a specified extension in a folder and its subfolders.
import os
def findFilesInFolder(path, pathList, extension, subFolders = True):
    """  Recursive function to find all files of an extension type in a folder (and optionally in all subfolders too)
    path:        Base directory to find files
    pathList:    A list that stores all paths
    extension:   File extension to find
    subFolders:  Bool.  If True, find files in all subfolders under path. If False, only searches files in the specified folder
    """
    try:   # Trapping an OSError:  File permissions problem I believe
        for entry in os.scandir(path):
            if entry.is_file() and entry.path.endswith(extension):
                pathList.append(entry.path)
            elif entry.is_dir() and subFolders:   # if its a directory, then repeat process as a nested function
                pathList = findFilesInFolder(entry.path, pathList, extension, subFolders)
    except OSError:
        print('Cannot access ' + path +'. Probably a permissions error')
    return pathList
dir_name = r'J:\myDirectory'
extension = ".txt"
pathList = []
pathList = findFilesInFolder(dir_name, pathList, extension, True)
If you are searching directories containing tens of thousands of files, appending to a list becomes inefficient. "Yielding" the results is a better solution. I have also included a function to convert the output to a Pandas DataFrame.
import os
import re
import pandas as pd
import numpy as np
def findFilesInFolderYield(path,  extension, containsTxt='', subFolders = True, excludeText = ''):
    """  Recursive function to find all files of an extension type in a folder (and optionally in all subfolders too)
    path:               Base directory to find files
    extension:          File extension to find.  e.g. 'txt'.  Regular expression. Or  'ls\d' to match ls1, ls2, ls3 etc
    containsTxt:        List of Strings, only finds file if it contains this text.  Ignore if '' (or blank)
    subFolders:         Bool.  If True, find files in all subfolders under path. If False, only searches files in the specified folder
    excludeText:        Text string.  Ignore if ''. Will exclude if text string is in path.
    """
    if type(containsTxt) == str: # if a string and not in a list
        containsTxt = [containsTxt]
    myregexobj = re.compile(r'\.' + extension + '$')    # Makes sure the file extension is at the end and is preceded by a .
    try:   # Trapping a OSError or FileNotFoundError:  File permissions problem I believe
        for entry in os.scandir(path):
            if entry.is_file() and myregexobj.search(entry.path): # 
                bools = [True for txt in containsTxt if txt in entry.path and (excludeText == '' or excludeText not in entry.path)]
                if len(bools)== len(containsTxt):
                    yield entry.stat().st_size, entry.stat().st_atime_ns, entry.stat().st_mtime_ns, entry.stat().st_ctime_ns, entry.path
            elif entry.is_dir() and subFolders:   # if its a directory, then repeat process as a nested function
                yield from findFilesInFolderYield(entry.path, extension, containsTxt, subFolders, excludeText)
    except OSError as ose:
        print('Cannot access ' + path +'. Probably a permissions error ', ose)
    except FileNotFoundError as fnf:
        print(path +' not found ', fnf)
def findFilesInFolderYieldandGetDf(path,  extension, containsTxt, subFolders = True, excludeText = ''):
    """  Converts returned data from findFilesInFolderYield and creates a Pandas DataFrame.
    Recursive function to find all files of an extension type in a folder (and optionally in all subfolders too)
    path:               Base directory to find files
    extension:          File extension to find.  e.g. 'txt'.  Regular expression. Or  'ls\d' to match ls1, ls2, ls3 etc
    containsTxt:        List of Strings, only finds file if it contains this text.  Ignore if '' (or blank)
    subFolders:         Bool.  If True, find files in all subfolders under path. If False, only searches files in the specified folder
    excludeText:        Text string.  Ignore if ''. Will exclude if text string is in path.
    """
    fileSizes, accessTimes, modificationTimes, creationTimes , paths  = zip(*findFilesInFolderYield(path,  extension, containsTxt, subFolders))
    df = pd.DataFrame({
            'FLS_File_Size':fileSizes,
            'FLS_File_Access_Date':accessTimes,
            'FLS_File_Modification_Date':np.array(modificationTimes).astype('timedelta64[ns]'),
            'FLS_File_Creation_Date':creationTimes,
            'FLS_File_PathName':paths,
                  })
    df['FLS_File_Modification_Date'] = pd.to_datetime(df['FLS_File_Modification_Date'],infer_datetime_format=True)
    df['FLS_File_Creation_Date'] = pd.to_datetime(df['FLS_File_Creation_Date'],infer_datetime_format=True)
    df['FLS_File_Access_Date'] = pd.to_datetime(df['FLS_File_Access_Date'],infer_datetime_format=True)
    return df
ext =   'txt'  # regular expression 
containsTxt=[]
path = r'C:\myFolder'
df = findFilesInFolderYieldandGetDf(path,  ext, containsTxt, subFolders = True)
              Try this; it will find all your files recursively:
import glob, os
os.chdir("H:\\wallpaper")  # use whatever directory you want
# double \\, not a single \
for file in glob.glob("**/*.txt", recursive = True):
    print(file)
              ** is only available in Python 3. And I don't like the chdir part; it's not necessary.
                    filepath = os.path.join('wallpaper') and then use it as glob.glob(filepath + "**/*.psd", recursive=True), which would yield the same result.
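Following those comments, a sketch that skips os.chdir entirely by joining the directory into the pattern ('wallpaper' here is just a placeholder directory name):

```python
import glob
import os

directory = "wallpaper"  # hypothetical; substitute your own folder

# Join the directory into the pattern instead of chdir-ing into it.
pattern = os.path.join(directory, "**", "*.txt")
for path in glob.glob(pattern, recursive=True):  # recursive= needs Python 3.5+
    print(path)
```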
                    I made a test (Python 3.6.4, W7x64) to see which solution is fastest for one folder (no subdirectories) to get a list of complete file paths for files with a specific extension.
To make a long story short, for this task os.listdir() is the fastest: it is 1.7x as fast as the next best, os.walk() (with a break!), 2.7x as fast as pathlib, 3.2x as fast as os.scandir(), and 3.3x as fast as glob.
Please keep in mind that those results will change when you need recursive results. If you copy/paste one of the methods below, please add a .lower(), otherwise .EXT will not be found when searching for .ext.
import os
import pathlib
import timeit
import glob
def a():
    path = pathlib.Path().cwd()
    list_sqlite_files = [str(f) for f in path.glob("*.sqlite")]
def b(): 
    path = os.getcwd()
    list_sqlite_files = [f.path for f in os.scandir(path) if os.path.splitext(f)[1] == ".sqlite"]
def c():
    path = os.getcwd()
    list_sqlite_files = [os.path.join(path, f) for f in os.listdir(path) if f.endswith(".sqlite")]
def d():
    path = os.getcwd()
    os.chdir(path)
    list_sqlite_files = [os.path.join(path, f) for f in glob.glob("*.sqlite")]
def e():
    path = os.getcwd()
    list_sqlite_files = [os.path.join(path, f) for f in glob.glob1(str(path), "*.sqlite")]
def f():
    path = os.getcwd()
    list_sqlite_files = []
    for root, dirs, files in os.walk(path):
        for file in files:
            if file.endswith(".sqlite"):
                list_sqlite_files.append( os.path.join(root, file) )
        break
print(timeit.timeit(a, number=1000))
print(timeit.timeit(b, number=1000))
print(timeit.timeit(c, number=1000))
print(timeit.timeit(d, number=1000))
print(timeit.timeit(e, number=1000))
print(timeit.timeit(f, number=1000))
Results:
# Python 3.6.4
0.431
0.515
0.161
0.548
0.537
0.274
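As noted above, all of these comparisons are case-sensitive. A minimal sketch of a case-insensitive variant (lowercasing both the filename and the extension before endswith; the function name is my own):

```python
import os

def list_files_ci(path, ext):
    """Return full paths of files in `path` whose extension matches `ext`, ignoring case."""
    ext = ext.lower()
    return [os.path.join(path, f) for f in os.listdir(path)
            if f.lower().endswith(ext)]
```

With this, searching for ".txt" also finds "REPORT.TXT".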
              This code makes my life simpler.
import os
fnames = ([file for root, dirs, files in os.walk(dir)
    for file in files
    if file.endswith('.txt') #or file.endswith('.png') or file.endswith('.pdf')
    ])
for fname in fnames: print(fname)
              Use fnmatch: https://docs.python.org/2/library/fnmatch.html
import fnmatch
import os
for file in os.listdir('.'):
    if fnmatch.fnmatch(file, '*.txt'):
        print(file)
              To get an array of ".txt" filenames from a folder called "data" in the same directory, I usually use this simple line of code:
import os
fileNames = [fileName for fileName in os.listdir("data") if fileName.endswith(".txt")]
              A functional solution with subdirectories:
from fnmatch import filter
from functools import partial
from itertools import chain
from os import path, walk
print(*chain(*(map(partial(path.join, root), filter(filenames, "*.txt")) for root, _, filenames in walk("mydir"))))
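For readability, the one-liner above unrolls into this equivalent loop (same fnmatch.filter, same os.walk; "mydir" is the same placeholder directory):

```python
from fnmatch import filter
from os import path, walk

# Collect every *.txt under "mydir", joined back onto its root directory.
matches = []
for root, _, filenames in walk("mydir"):
    for name in filter(filenames, "*.txt"):
        matches.append(path.join(root, name))
print(*matches)
```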
              If the folder contains a lot of files or memory is a constraint, consider using generators:
import os

def yield_files_with_extensions(folder_path, file_extension):
    for _, _, files in os.walk(folder_path):
        for file in files:
            if file.endswith(file_extension):
                yield file
Option A: iterate
for f in yield_files_with_extensions('.', '.txt'): 
    print(f)
Option B: get them all
files = [f for f in yield_files_with_extensions('.', '.txt')]
              A copy-pasteable solution, similar to ghostdog's:
def get_all_filepaths(root_path, ext):
    """
    Search all files which have a given extension within root_path.
    This ignores the case of the extension and searches subdirectories, too.
    Parameters
    ----------
    root_path : str
    ext : str
    Returns
    -------
    list of str
    Examples
    --------
    >>> get_all_filepaths('/run', '.lock')
    ['/run/unattended-upgrades.lock',
     '/run/mlocate.daily.lock',
     '/run/xtables.lock',
     '/run/mysqld/mysqld.sock.lock',
     '/run/postgresql/.s.PGSQL.5432.lock',
     '/run/network/.ifstate.lock',
     '/run/lock/asound.state.lock']
    """
    import os
    all_files = []
    for root, dirs, files in os.walk(root_path):
        for filename in files:
            if filename.lower().endswith(ext):
                all_files.append(os.path.join(root, filename))
    return all_files
              Use the Python os module to find files with a specific extension.
A simple example here:
import os
# This is the path where you want to search
path = r'd:'  
# this is extension you want to detect
extension = '.txt'   # this can be : .jpg  .png  .xls  .log .....
for root, dirs_list, files_list in os.walk(path):
    for file_name in files_list:
        if os.path.splitext(file_name)[-1] == extension:
            file_name_path = os.path.join(root, file_name)
            print(file_name)
            print(file_name_path)   # This is the full path of the filtered file
              Quite a few users have replied with os.walk answers, which include all files, but also all directories and subdirectories and their files.
import os
def files_in_dir(path, extension=''):
    """
       Generator: yields all of the files in <path> ending with
       <extension>
       \param   path       Absolute or relative path to inspect,
       \param   extension  [optional] Only yield files matching this,
       \yield              [filenames]
    """
    for _, dirs, files in os.walk(path):
        dirs[:] = []  # do not recurse directories.
        yield from [f for f in files if f.endswith(extension)]
# Example: print all the .py files in './python'
for filename in files_in_dir('./python', '.py'):
    print("-", filename)
Or as a one-off where you don't need a generator:
path, ext = "./python", ".py"
for _, _, dirfiles in os.walk(path):
    matches = (f for f in dirfiles if f.endswith(ext))
    break
for filename in matches:
    print("-", filename)
If you are going to use the matches for something else, you may want to make it a list rather than a generator expression:
    matches = [f for f in dirfiles if f.endswith(ext)]
              A simple method using a for loop:
import os
ext = ["e", "x", "e"]  # extension letters to match, here "exe"
p = os.listdir('E:')   # path
for n in range(len(p)):
    name = p[n]
    myfile = [name[-3], name[-2], name[-1]]
    if myfile == ext:
        print(name)
    else:
        print("nops")
Although this could be made more general.
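One way to generalize the character-by-character comparison above is a plain suffix check; str.endswith even accepts a tuple, so several extensions can be matched at once. A sketch (the function name is my own):

```python
import os

def files_with_extensions(path, extensions):
    """Return names in `path` ending with any of `extensions`.

    `extensions` is an iterable like (".exe", ".txt");
    str.endswith accepts a tuple of suffixes directly.
    """
    return [name for name in os.listdir(path)
            if name.endswith(tuple(extensions))]
```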