string.split() returns a list instance. Is there a version that returns a generator instead? Are there any reasons against having a generator version?

I was splitting a string and then feeding the pieces into a generator-based pipeline, which got me thinking about whether there is a way for split to return a generator instead.
Answers:
There is a good chance that re.finditer uses fairly minimal memory overhead.
import re

def split_iter(string):
    return (x.group(0) for x in re.finditer(r"[A-Za-z']+", string))
Demo:
>>> list( split_iter("A programmer's RegEx test.") )
['A', "programmer's", 'RegEx', 'test']
Edit: I just confirmed that this takes constant memory in Python 3.2.1, assuming my testing methodology was correct. I created a very large string (around 1 GB), then iterated through the iterable with a for loop (not a list comprehension, which would have generated extra memory). This did not result in any noticeable growth of memory (that is, if memory did grow, it was far, far less than the 1 GB string).
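One way to reproduce that kind of check on a smaller scale, assuming Python 3.4+ for tracemalloc (this sketch and its sizes are mine, not part of the original test):

import re
import tracemalloc

def split_iter(string):
    # same generator as in the answer above
    return (x.group(0) for x in re.finditer(r"[A-Za-z']+", string))

big = "word " * (10 ** 6)             # scale the multiplier up for a truly huge string
tracemalloc.start()
for token in split_iter(big):         # plain for loop; wrapping in list() would defeat the purpose
    pass
current, peak = tracemalloc.get_traced_memory()
tracemalloc.stop()
print(peak)                           # should stay tiny compared to len(big)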
What about a_string.split("delimiter")?
str.split() doesn't accept regular expressions; that's re.split() you're thinking of...
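For reference, a quick illustration of that difference (my example, not part of the original comment); both return lists, only the delimiter handling differs:

import re

print("a, b;c".split(","))               # str.split: literal substring delimiter -> ['a', ' b;c']
print(re.split(r"[,;]\s*", "a, b;c"))    # re.split: pattern delimiter -> ['a', 'b', 'c']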
The most efficient way I can think of is to write one using the offset parameter of the str.find() method. This avoids lots of memory use, as well as the overhead of a regexp when it's not needed.
[edit 2016-08-02: updated this to optionally support regex separators]
import re

def isplit(source, sep=None, regex=False):
    """
    generator version of str.split()
    :param source:
        source string (unicode or bytes)
    :param sep:
        separator to split on.
    :param regex:
        if True, will treat sep as regular expression.
    :returns:
        generator yielding elements of string.
    """
    if sep is None:
        # mimic default python behavior
        source = source.strip()
        sep = "\\s+"
        if isinstance(source, bytes):
            sep = sep.encode("ascii")
        regex = True
    if regex:
        # version using re.finditer()
        if not hasattr(sep, "finditer"):
            sep = re.compile(sep)
        start = 0
        for m in sep.finditer(source):
            idx = m.start()
            assert idx >= start
            yield source[start:idx]
            start = m.end()
        yield source[start:]
    else:
        # version using str.find(), less overhead than re.finditer()
        sepsize = len(sep)
        start = 0
        while True:
            idx = source.find(sep, start)
            if idx == -1:
                yield source[start:]
                return
            yield source[start:idx]
            start = idx + sepsize
This can be used however you like...

>>> print list(isplit("abcb", "b"))
['a', 'c', '']
There is a little cost to seeking within the string each time find() or a slice is performed, but it should be minimal since strings are represented as contiguous arrays in memory.
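Since the 2016 edit added regex support, here is a quick illustration of that mode as well (the example values are mine, not from the original answer):

>>> list(isplit("a1b22c", "[0-9]+", regex=True))
['a', 'b', 'c']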
Here is a generator version of split() implemented via re.search() that does not have the problem of allocating too many substrings.
import re

def itersplit(s, sep=None):
    exp = re.compile(r'\s+' if sep is None else re.escape(sep))
    pos = 0
    while True:
        m = exp.search(s, pos)
        if not m:
            if pos < len(s) or sep is not None:
                yield s[pos:]
            break
        if pos < m.start() or sep is not None:
            yield s[pos:m.start()]
        pos = m.end()
sample1 = "Good evening, world!"
sample2 = " Good evening, world! "
sample3 = "brackets][all][][over][here"
sample4 = "][brackets][all][][over][here]["
assert list(itersplit(sample1)) == sample1.split()
assert list(itersplit(sample2)) == sample2.split()
assert list(itersplit(sample3, '][')) == sample3.split('][')
assert list(itersplit(sample4, '][')) == sample4.split('][')
EDIT: Corrected handling of surrounding whitespace when no separator is given.
What about re.finditer?
I did some performance testing of the various methods proposed (I won't repeat them here). Some results:
str.split (default) = 0.3461570239996945
re.finditer (ninjagecko's answer) = 0.698872097000276
str.find (one of Eli Collins's answers) = 0.7230395330007013
itertools.takewhile (Ignacio Vazquez-Abrams's answer) = 2.023023967998597
str.split(..., maxsplit=1) recursion = N/A †

† The recursion answers (string.split with maxsplit=1) fail to complete in a reasonable time; given string.split's speed, they may work better on shorter strings, but then I can't see the use case for short strings where memory isn't an issue anyway.
Tested using timeit on:
the_text = "100 " * 9999 + "100"

def test_function( method ):
    def fn( ):
        total = 0
        for x in method( the_text ):
            total += int( x )
        return total
    return fn
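The answer doesn't show the exact timeit invocation; one plausible way to collect numbers like those above (my sketch, using the itersplit generator from the answer above as an example candidate) would be:

import timeit

print(timeit.timeit(test_function(str.split), number=100))   # baseline: str.split(the_text)
print(timeit.timeit(test_function(itersplit), number=100))   # a generator-based candidate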
This raises another question as to why string.split is so much faster despite its memory usage.
Here is my implementation, which is much, much faster and more complete than the other answers here. It has 4 separate subfunctions for different cases.
I'll just copy the docstring of the main str_split function:
str_split(s, *delims, empty=None)

Split the string s by the rest of the arguments, possibly omitting empty parts (the empty keyword argument is responsible for that). This is a generator function.

When only one delimiter is supplied, the string is simply split by it. empty is then True by default.

str_split('[]aaa[][]bb[c', '[]')
    -> '', 'aaa', '', 'bb[c'
str_split('[]aaa[][]bb[c', '[]', empty=False)
    -> 'aaa', 'bb[c'

When multiple delimiters are supplied, the string is split by the longest possible sequences of those delimiters by default, or, if empty is set to True, empty strings between the delimiters are also included. Note that the delimiters in this case may only be single characters.

str_split('aaa, bb : c;', ' ', ',', ':', ';')
    -> 'aaa', 'bb', 'c'
str_split('aaa, bb : c;', *' ,:;', empty=True)
    -> 'aaa', '', 'bb', '', '', 'c', ''

When no delimiters are supplied, string.whitespace is used, so the effect is the same as str.split(), except this function is a generator.

str_split('aaa\\t bb c \\n')
    -> 'aaa', 'bb', 'c'
import string

def _str_split_chars(s, delims):
    "Split the string `s` by characters contained in `delims`, including the \
    empty parts between two consecutive delimiters"
    start = 0
    for i, c in enumerate(s):
        if c in delims:
            yield s[start:i]
            start = i+1
    yield s[start:]

def _str_split_chars_ne(s, delims):
    "Split the string `s` by longest possible sequences of characters \
    contained in `delims`"
    start = 0
    in_s = False
    for i, c in enumerate(s):
        if c in delims:
            if in_s:
                yield s[start:i]
                in_s = False
        else:
            if not in_s:
                in_s = True
                start = i
    if in_s:
        yield s[start:]

def _str_split_word(s, delim):
    "Split the string `s` by the string `delim`"
    dlen = len(delim)
    start = 0
    try:
        while True:
            i = s.index(delim, start)
            yield s[start:i]
            start = i+dlen
    except ValueError:
        pass
    yield s[start:]

def _str_split_word_ne(s, delim):
    "Split the string `s` by the string `delim`, not including empty parts \
    between two consecutive delimiters"
    dlen = len(delim)
    start = 0
    try:
        while True:
            i = s.index(delim, start)
            if start!=i:
                yield s[start:i]
            start = i+dlen
    except ValueError:
        pass
    if start<len(s):
        yield s[start:]

def str_split(s, *delims, empty=None):
    """\
    Split the string `s` by the rest of the arguments, possibly omitting
    empty parts (`empty` keyword argument is responsible for that).
    This is a generator function.

    When only one delimiter is supplied, the string is simply split by it.
    `empty` is then `True` by default.
        str_split('[]aaa[][]bb[c', '[]')
            -> '', 'aaa', '', 'bb[c'
        str_split('[]aaa[][]bb[c', '[]', empty=False)
            -> 'aaa', 'bb[c'

    When multiple delimiters are supplied, the string is split by longest
    possible sequences of those delimiters by default, or, if `empty` is set to
    `True`, empty strings between the delimiters are also included. Note that
    the delimiters in this case may only be single characters.
        str_split('aaa, bb : c;', ' ', ',', ':', ';')
            -> 'aaa', 'bb', 'c'
        str_split('aaa, bb : c;', *' ,:;', empty=True)
            -> 'aaa', '', 'bb', '', '', 'c', ''

    When no delimiters are supplied, `string.whitespace` is used, so the effect
    is the same as `str.split()`, except this function is a generator.
        str_split('aaa\\t bb c \\n')
            -> 'aaa', 'bb', 'c'
    """
    if len(delims)==1:
        f = _str_split_word if empty is None or empty else _str_split_word_ne
        return f(s, delims[0])
    if len(delims)==0:
        delims = string.whitespace
    delims = set(delims) if len(delims)>=4 else ''.join(delims)
    if any(len(d)>1 for d in delims):
        raise ValueError("Only 1-character multiple delimiters are supported")
    f = _str_split_chars if empty else _str_split_chars_ne
    return f(s, delims)
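A quick interactive check of the examples from the docstring (added here for illustration; not part of the original answer):

>>> list(str_split('[]aaa[][]bb[c', '[]'))
['', 'aaa', '', 'bb[c']
>>> list(str_split('aaa, bb : c;', *' ,:;'))
['aaa', 'bb', 'c']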
This function works in Python 3, and an easy, though quite ugly, fix can be applied to make it work in both the 2 and 3 versions. The first lines of the function should be changed to:
def str_split(s, *delims, **kwargs):
    """...docstring..."""
    empty = kwargs.get('empty')
No, but it should be easy enough to write one using itertools.takewhile().
EDIT:
Very simple, half-broken implementation:
import itertools
import string

def isplitwords(s):
    i = iter(s)
    while True:
        r = []
        for c in itertools.takewhile(lambda x: not x in string.whitespace, i):
            r.append(c)
        else:
            if r:
                yield ''.join(r)
                continue
            else:
                raise StopIteration()
What's the benefit of using takeWhile() with a predicate to split a string into words over the default split()?
string.whitespace.
'abc<def<>ghi<><>lmn'.split('<>') == ['abc<def', 'ghi', '', 'lmn']
I don't see any obvious benefit to a generator version of split(). The generator object would have to contain the whole string to iterate over, so you're not going to save any memory by having a generator.
If you wanted to write one, it would be easy enough though:
import string

def gsplit(s, sep=string.whitespace):
    word = []
    for c in s:
        if c in sep:
            if word:
                yield "".join(word)
                word = []
        else:
            word.append(c)
    if word:
        yield "".join(word)
Checking with id() proved me right. And obviously, since strings are immutable, you don't need to worry about someone changing the original string while you're iterating over it.
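A quick CPython-specific way to see that the generator only keeps a reference to the original string rather than a copy (my illustration, using the gsplit above; gi_frame introspection is an implementation detail):

>>> s = "Good evening, world!"
>>> g = gsplit(s)
>>> g.gi_frame.f_locals['s'] is s
True
>>> list(g)
['Good', 'evening,', 'world!']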
I wrote a version of @ninjagecko's answer that behaves more like string.split (i.e. whitespace-delimited by default, and you can specify a delimiter).
import re

def isplit(string, delimiter=None):
    """Like string.split but returns an iterator (lazy)

    Multiple character delimiters are not handled.
    """
    if delimiter is None:
        # Whitespace delimited by default
        delim = r"\s"
    elif len(delimiter) != 1:
        raise ValueError("Can only handle single character delimiters",
                         delimiter)
    else:
        # Escape, in case it's "\", "*" etc.
        delim = re.escape(delimiter)
    return (x.group(0) for x in re.finditer(r"[^{}]+".format(delim), string))
Here are the tests I used (in both Python 3 and Python 2):
# Wrapper to make it a list
def helper(*args, **kwargs):
    return list(isplit(*args, **kwargs))
# Normal delimiters
assert helper("1,2,3", ",") == ["1", "2", "3"]
assert helper("1;2;3,", ";") == ["1", "2", "3,"]
assert helper("1;2 ;3, ", ";") == ["1", "2 ", "3, "]
# Whitespace
assert helper("1 2 3") == ["1", "2", "3"]
assert helper("1\t2\t3") == ["1", "2", "3"]
assert helper("1\t2 \t3") == ["1", "2", "3"]
assert helper("1\n2\n3") == ["1", "2", "3"]
# Surrounding whitespace dropped
assert helper(" 1 2 3 ") == ["1", "2", "3"]
# Regex special characters
assert helper(r"1\2\3", "\\") == ["1", "2", "3"]
assert helper(r"1*2*3", "*") == ["1", "2", "3"]
# No multi-char delimiters allowed
try:
    helper(r"1,.2,.3", ",.")
    assert False
except ValueError:
    pass
Python's regexes are documented to do "the right thing" for Unicode whitespace, but I haven't actually tested that.
Also available as a gist.
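A quick check one could run (my addition, not from the original answer) suggests that \s does match Unicode whitespace characters in Python 3:

>>> import re
>>> re.findall(r"\s", "a\u00a0b\u2003c")   # no-break space and em space
['\xa0', '\u2003']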
more_itertools.split_at offers an analog to str.split for iterators.
>>> import more_itertools as mit
>>> list(mit.split_at("abcdcba", lambda x: x == "b"))
[['a'], ['c', 'd', 'c'], ['a']]
>>> "abcdcba".split("b")
['a', 'cdc', 'a']
more_itertools is a third-party package.
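Note that split_at yields lists of items rather than strings; joining each chunk back together (my addition, continuing the session above) recovers the str.split result:

>>> ["".join(chunk) for chunk in mit.split_at("abcdcba", lambda x: x == "b")]
['a', 'cdc', 'a']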
I want to show how to use the finditer solution to return a generator for a given delimiter, and then use the pairwise recipe from itertools to build a previous/next iteration that gets the same words as the original split method.
from more_itertools import pairwise
import re

string = "dasdha hasud hasuid hsuia dhsuai dhasiu dhaui d"
delimiter = " "

# split according to the given delimiter, including segments beginning at the
# start of the string and ending at its end
for prev, curr in pairwise(re.finditer("^|[{0}]+|$".format(delimiter), string)):
    print(string[prev.end(): curr.start()])
Note:
def split_generator(f,s):
    """
    f is a string, s is the substring we split on.
    This produces a generator rather than a possibly
    memory intensive list.
    """
    i=0
    j=0
    while j<len(f):
        if i>=len(f):
            yield f[j:]
            j=i
        elif f[i] != s:
            i=i+1
        else:
            yield [f[j:i]]
            j=i+1
            i=i+1
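For reference, a quick check of what this yields (my example, not from the answer); note that each piece comes out wrapped in a one-element list, and the trailing empty piece that "abcb".split("b") would produce is missing:

>>> list(split_generator("abcb", "b"))
[['a'], ['c']]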
Why do you yield [f[j:i]] instead of f[j:i]?