How to use VideoToolbox to decompress an H.264 video stream


72

I had a lot of trouble figuring out how to use Apple's hardware accelerated video framework to decompress an H.264 video stream. After a few weeks I figured it out and wanted to share an extensive example since I couldn't find one.

My goal is to give a thorough, instructive example of Video Toolbox introduced in WWDC '14 session 513. My code will not compile or run as-is since it needs to be integrated with an elementary H.264 stream (like a video read from a file or streamed from online, etc.) and needs to be tweaked depending on the specific case.

I should mention that I have very little experience with video encoding/decoding except what I learned while researching the subject. I don't know all the details about video formats, parameter structures, etc., so I've only included what I think you need to know.

I am using Xcode 6.2 and have deployed to iOS devices running iOS 8.1 and 8.2.


1
An example of decompressing and then re-compressing seamlessly looping H264 content can be found at this question: stackoverflow.com/a/33335884/763355
MoDJ 2016

Answers:


187

Concepts:

NALUs: NALUs are simply chunks of data of varying length that have a NALU start code header 0x00 00 00 01 YY, where the first 5 bits of YY tell you what type of NALU this is and therefore what type of data follows the header. (Since you only need the first 5 bits, I use YY & 0x1F to get just the relevant bits.) I list all of these types in the array NSString * const naluTypesStrings[], but you don't need to know what they all are. (There's a tiny sketch of extracting the type bits right after these concepts.)

Parameters: Your decoder needs parameters so it knows how the H.264 video data is stored. The 2 you need to set are the Sequence Parameter Set (SPS) and the Picture Parameter Set (PPS), and they each have their own NALU type number. You don't need to know what the parameters mean; the decoder knows what to do with them.

H.264 Stream Format: In most H.264 streams, you will receive an initial set of PPS and SPS parameters followed by an i frame (aka IDR frame) NALU. Then you will receive several P frame NALUs (maybe a few dozen or so), then another set of parameters (which may be the same as the initial parameters) and an i frame, more P frames, etc. i frames are much bigger than P frames. Conceptually you can think of the i frame as an entire image of the video, and the P frames as just the changes made to that i frame, until you receive the next i frame.
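As a tiny illustration of the header rule above, a minimal sketch of pulling the type bits out of a buffer that begins with a 4-byte start code might look like this (the code later in this answer does the same thing inline):

// Minimal sketch: extract the NALU type from a buffer that begins with the
// 4-byte Annex B start code 0x00 00 00 01.
static int NALUTypeForBuffer(const uint8_t *buf)
{
    // buf[4] is the first byte after the start code; its 5 low bits are the type
    // (1 = non-IDR/P frame, 5 = IDR/I frame, 7 = SPS, 8 = PPS)
    return buf[4] & 0x1F;
}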

Procedure:

  1. Generate individual NALUs from your H.264 stream. I cannot show code for this step since it depends a lot on what video source you're using. I made a graphic to show what I was working with ("data" in the graphic is "frame" in my following code), but your case may and probably will differ. My method receivedRawVideoFrame: is called every time I receive a frame (uint8_t *frame) that was one of 2 types. In the graphic, those 2 frame types are the 2 big purple boxes.

  2. Create a CMVideoFormatDescriptionRef from your SPS and PPS NALUs using CMVideoFormatDescriptionCreateFromH264ParameterSets(). You cannot display any frames without doing this first. The SPS and PPS may look like a jumble of numbers, but VTD knows what to do with them. All you need to know is that a CMVideoFormatDescriptionRef is a description of your video data, like width/height, format type (kCMPixelFormat_32BGRA, kCMVideoCodecType_H264, etc.), aspect ratio, color space, etc. Your decoder will hold onto the parameters until a new set arrives (sometimes parameters are resent regularly even when they haven't changed).

  3. Re-package your IDR and non-IDR frame NALUs according to the "AVCC" format. This means removing the NALU start codes and replacing them with a 4-byte header that states the length of the NALU. You don't need to do this for the SPS and PPS NALUs. (Note that the 4-byte NALU length header is in big-endian, so if you have a UInt32 value it must be byte-swapped before copying it to the CMBlockBuffer, e.g. with CFSwapInt32. In my code I do this with the htonl function call.)

  4. Package the IDR and non-IDR NALU frames into a CMBlockBuffer. Do NOT do this with the SPS and PPS parameter NALUs. All you need to know about CMBlockBuffers is that they are a way to wrap arbitrary blocks of data in Core Media. (Any compressed video data in a video pipeline is wrapped in one of these.)

  5. Package the CMBlockBuffer into a CMSampleBuffer. All you need to know about CMSampleBuffers is that they wrap up our CMBlockBuffers with other information (here that would be the CMVideoFormatDescription and the CMTime, if a CMTime is used).

  6. Create a VTDecompressionSessionRef and feed the sample buffers into VTDecompressionSessionDecodeFrame(). Alternatively, you can use AVSampleBufferDisplayLayer and its enqueueSampleBuffer: method, and then you don't need to use a VTDecompSession at all. It's simpler to set up, but it will not throw errors the way VTD will if something goes wrong.

  7. In the VTDecompSession callback, use the resulting CVImageBufferRef to display the video frame. If you need to convert your CVImageBuffer to a UIImage, see my other StackOverflow answer; a minimal sketch of that conversion also follows this list.
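For step 7, one common way to do the CVImageBuffer-to-UIImage conversion is via Core Image; a minimal sketch follows (the helper name is arbitrary, and in a real app you would want to reuse the CIContext instead of creating one per frame):

#import <CoreImage/CoreImage.h>

- (UIImage *)imageFromImageBuffer:(CVImageBufferRef)imageBuffer
{
    // wrap the decoded pixel buffer in a CIImage and render it to a CGImage
    CIImage *ciImage = [CIImage imageWithCVPixelBuffer:imageBuffer];
    CIContext *context = [CIContext contextWithOptions:nil];
    CGRect rect = CGRectMake(0, 0,
                             CVPixelBufferGetWidth(imageBuffer),
                             CVPixelBufferGetHeight(imageBuffer));
    CGImageRef cgImage = [context createCGImage:ciImage fromRect:rect];

    UIImage *image = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);   // createCGImage returns a +1 CGImageRef
    return image;
}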

Other notes:

  • H.264 streams can vary a lot. From what I learned, NALU start code headers are sometimes 3 bytes (0x00 00 01) and sometimes 4 (0x00 00 00 01). My code works with 4 bytes; you will need to change a few things around if you're working with 3 (see the sketches after these notes for one way to detect the start-code length).

  • If you want to know more about NALUs, I found this answer very helpful. In my case I found that I didn't need to ignore the "emulation prevention" bytes it describes, so I personally skipped that step, but you may need to know about them (a sketch of stripping those bytes is also included after these notes).

  • If the VTDecompressionSession outputs an error number (like -12909), look up the error code in your Xcode project. Find the VideoToolbox framework in the project navigator, open it and find the header VTErrors.h. If you can't find it, I've also included all the error codes below in another answer.
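Here are two rough helper sketches for those last two points, detecting the start-code length and stripping emulation prevention bytes. They are not used by the code below; treat them only as a starting point:

// 1) Detect whether a NALU begins with a 3-byte (0x00 00 01) or a 4-byte
//    (0x00 00 00 01) Annex B start code, so you know how many header bytes
//    to strip and replace when re-packaging to AVCC.
static int StartCodeLength(const uint8_t *buf, size_t size)
{
    if (size >= 4 && buf[0] == 0x00 && buf[1] == 0x00 && buf[2] == 0x00 && buf[3] == 0x01)
        return 4;
    if (size >= 3 && buf[0] == 0x00 && buf[1] == 0x00 && buf[2] == 0x01)
        return 3;
    return 0;   // no start code at this offset
}

// 2) Strip "emulation prevention" bytes: inside a NALU payload the encoder
//    inserts 0x03 after a 0x00 0x00 pair so the payload can never look like
//    a start code. Dropping those 0x03 bytes recovers the raw payload.
//    Returns the new length; `out` may be the same buffer as `in`.
static size_t StripEmulationPrevention(const uint8_t *in, size_t inSize, uint8_t *out)
{
    size_t outSize = 0;
    int zeroCount = 0;
    for (size_t i = 0; i < inSize; i++)
    {
        if (zeroCount >= 2 && in[i] == 0x03)
        {
            zeroCount = 0;          // skip the emulation prevention byte
            continue;
        }
        zeroCount = (in[i] == 0x00) ? zeroCount + 1 : 0;
        out[outSize++] = in[i];
    }
    return outSize;
}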

Code Example:

So let's start by declaring some global variables and including the VT framework (VT = Video Toolbox).

#import <VideoToolbox/VideoToolbox.h>

@property (nonatomic, assign) CMVideoFormatDescriptionRef formatDesc;
@property (nonatomic, assign) VTDecompressionSessionRef decompressionSession;
@property (nonatomic, retain) AVSampleBufferDisplayLayer *videoLayer;
@property (nonatomic, assign) int spsSize;
@property (nonatomic, assign) int ppsSize;

The following array is included only so that you can print out what type of NALU frame you are receiving. If you know what all these types mean, good for you, you know more about H.264 than I do :) My code only handles types 1, 5, 7 and 8.

NSString * const naluTypesStrings[] =
{
    @"0: Unspecified (non-VCL)",
    @"1: Coded slice of a non-IDR picture (VCL)",    // P frame
    @"2: Coded slice data partition A (VCL)",
    @"3: Coded slice data partition B (VCL)",
    @"4: Coded slice data partition C (VCL)",
    @"5: Coded slice of an IDR picture (VCL)",      // I frame
    @"6: Supplemental enhancement information (SEI) (non-VCL)",
    @"7: Sequence parameter set (non-VCL)",         // SPS parameter
    @"8: Picture parameter set (non-VCL)",          // PPS parameter
    @"9: Access unit delimiter (non-VCL)",
    @"10: End of sequence (non-VCL)",
    @"11: End of stream (non-VCL)",
    @"12: Filler data (non-VCL)",
    @"13: Sequence parameter set extension (non-VCL)",
    @"14: Prefix NAL unit (non-VCL)",
    @"15: Subset sequence parameter set (non-VCL)",
    @"16: Reserved (non-VCL)",
    @"17: Reserved (non-VCL)",
    @"18: Reserved (non-VCL)",
    @"19: Coded slice of an auxiliary coded picture without partitioning (non-VCL)",
    @"20: Coded slice extension (non-VCL)",
    @"21: Coded slice extension for depth view components (non-VCL)",
    @"22: Reserved (non-VCL)",
    @"23: Reserved (non-VCL)",
    @"24: STAP-A Single-time aggregation packet (non-VCL)",
    @"25: STAP-B Single-time aggregation packet (non-VCL)",
    @"26: MTAP16 Multi-time aggregation packet (non-VCL)",
    @"27: MTAP24 Multi-time aggregation packet (non-VCL)",
    @"28: FU-A Fragmentation unit (non-VCL)",
    @"29: FU-B Fragmentation unit (non-VCL)",
    @"30: Unspecified (non-VCL)",
    @"31: Unspecified (non-VCL)",
};

Now this is where all the magic happens.

-(void) receivedRawVideoFrame:(uint8_t *)frame withSize:(uint32_t)frameSize isIFrame:(int)isIFrame
{
    OSStatus status = noErr;   // initialize so the checks below are valid even for frames that carry no SPS/PPS

    uint8_t *data = NULL;
    uint8_t *pps = NULL;
    uint8_t *sps = NULL;

    // I know what my H.264 data source's NALUs look like so I know start code index is always 0.
    // if you don't know where it starts, you can use a for loop similar to how i find the 2nd and 3rd start codes
    int startCodeIndex = 0;
    int secondStartCodeIndex = 0;
    int thirdStartCodeIndex = 0;

    long blockLength = 0;

    CMSampleBufferRef sampleBuffer = NULL;
    CMBlockBufferRef blockBuffer = NULL;

    int nalu_type = (frame[startCodeIndex + 4] & 0x1F);
    NSLog(@"~~~~~~~ Received NALU Type \"%@\" ~~~~~~~~", naluTypesStrings[nalu_type]);

    // if we havent already set up our format description with our SPS PPS parameters, we
    // can't process any frames except type 7 that has our parameters
    if (nalu_type != 7 && _formatDesc == NULL)
    {
        NSLog(@"Video error: Frame is not an I Frame and format description is null");
        return;
    }

    // NALU type 7 is the SPS parameter NALU
    if (nalu_type == 7)
    {
        // find where the second PPS start code begins, (the 0x00 00 00 01 code)
        // from which we also get the length of the first SPS code
        for (int i = startCodeIndex + 4; i < startCodeIndex + 40; i++)
        {
            if (frame[i] == 0x00 && frame[i+1] == 0x00 && frame[i+2] == 0x00 && frame[i+3] == 0x01)
            {
                secondStartCodeIndex = i;
                _spsSize = secondStartCodeIndex;   // includes the header in the size
                break;
            }
        }

        // find what the second NALU type is
        nalu_type = (frame[secondStartCodeIndex + 4] & 0x1F);
        NSLog(@"~~~~~~~ Received NALU Type \"%@\" ~~~~~~~~", naluTypesStrings[nalu_type]);
    }

    // type 8 is the PPS parameter NALU
    if(nalu_type == 8)
    {
        // find where the NALU after this one starts so we know how long the PPS parameter is
        for (int i = _spsSize + 4; i < _spsSize + 30; i++)
        {
            if (frame[i] == 0x00 && frame[i+1] == 0x00 && frame[i+2] == 0x00 && frame[i+3] == 0x01)
            {
                thirdStartCodeIndex = i;
                _ppsSize = thirdStartCodeIndex - _spsSize;
                break;
            }
        }

        // allocate enough data to fit the SPS and PPS parameters into our data objects.
        // VTD doesn't want you to include the start code header (4 bytes long), which is why we subtract 4 here
        sps = malloc(_spsSize - 4);
        pps = malloc(_ppsSize - 4);

        // copy in the actual sps and pps values, again ignoring the 4 byte header
        memcpy (sps, &frame[4], _spsSize-4);
        memcpy (pps, &frame[_spsSize+4], _ppsSize-4);

        // now we set our H264 parameters
        uint8_t*  parameterSetPointers[2] = {sps, pps};
        size_t parameterSetSizes[2] = {_spsSize-4, _ppsSize-4};

        // suggestion from @Kris Dude's answer below
        if (_formatDesc) 
        {
            CFRelease(_formatDesc);
            _formatDesc = NULL;
        }

        status = CMVideoFormatDescriptionCreateFromH264ParameterSets(kCFAllocatorDefault, 2, 
                                                (const uint8_t *const*)parameterSetPointers, 
                                                parameterSetSizes, 4, 
                                                &_formatDesc);

        NSLog(@"\t\t Creation of CMVideoFormatDescription: %@", (status == noErr) ? @"successful!" : @"failed...");
        if(status != noErr) NSLog(@"\t\t Format Description ERROR type: %d", (int)status);

        // See if decomp session can convert from previous format description 
        // to the new one, if not we need to remake the decomp session.
        // This snippet was not necessary for my applications but it could be for yours
        /*BOOL needNewDecompSession = (VTDecompressionSessionCanAcceptFormatDescription(_decompressionSession, _formatDesc) == NO);
         if(needNewDecompSession)
         {
             [self createDecompSession];
         }*/

        // now lets handle the IDR frame that (should) come after the parameter sets
        // I say "should" because that's how I expect my H264 stream to work, YMMV
        nalu_type = (frame[thirdStartCodeIndex + 4] & 0x1F);
        NSLog(@"~~~~~~~ Received NALU Type \"%@\" ~~~~~~~~", naluTypesStrings[nalu_type]);
    }

    // create our VTDecompressionSession.  This isnt neccessary if you choose to use AVSampleBufferDisplayLayer
    if((status == noErr) && (_decompressionSession == NULL))
    {
        [self createDecompSession];
    }

    // type 5 is an IDR frame NALU.  The SPS and PPS NALUs should always be followed by an IDR (or IFrame) NALU, as far as I know
    if(nalu_type == 5)
    {
        // find the offset, or where the SPS and PPS NALUs end and the IDR frame NALU begins
        int offset = _spsSize + _ppsSize;
        blockLength = frameSize - offset;
        data = malloc(blockLength);
        data = memcpy(data, &frame[offset], blockLength);

        // replace the start code header on this NALU with its size.
        // AVCC format requires that you do this.  
        // htonl converts the unsigned int from host to network byte order
        uint32_t dataLength32 = htonl (blockLength - 4);
        memcpy (data, &dataLength32, sizeof (uint32_t));

        // create a block buffer from the IDR NALU
        status = CMBlockBufferCreateWithMemoryBlock(NULL, data,  // memoryBlock to hold buffered data
                                                    blockLength,  // block length of the mem block in bytes.
                                                    kCFAllocatorNull, NULL,
                                                    0, // offsetToData
                                                    blockLength,   // dataLength of relevant bytes, starting at offsetToData
                                                    0, &blockBuffer);

        NSLog(@"\t\t BlockBufferCreation: \t %@", (status == kCMBlockBufferNoErr) ? @"successful!" : @"failed...");
    }

    // NALU type 1 is non-IDR (or PFrame) picture
    if (nalu_type == 1)
    {
        // non-IDR frames do not have an offset due to SPS and PPS, so the approach
        // is similar to the IDR frames just without the offset
        blockLength = frameSize;
        data = malloc(blockLength);
        data = memcpy(data, &frame[0], blockLength);

        // again, replace the start header with the size of the NALU
        uint32_t dataLength32 = htonl (blockLength - 4);
        memcpy (data, &dataLength32, sizeof (uint32_t));

        status = CMBlockBufferCreateWithMemoryBlock(NULL, data,  // memoryBlock to hold data. If NULL, block will be alloc when needed
                                                    blockLength,  // overall length of the mem block in bytes
                                                    kCFAllocatorNull, NULL,
                                                    0,     // offsetToData
                                                    blockLength,  // dataLength of relevant data bytes, starting at offsetToData
                                                    0, &blockBuffer);

        NSLog(@"\t\t BlockBufferCreation: \t %@", (status == kCMBlockBufferNoErr) ? @"successful!" : @"failed...");
    }

    // now create our sample buffer from the block buffer,
    if(status == noErr)
    {
        // here I'm not bothering with any timing specifics since in my case we displayed all frames immediately
        const size_t sampleSize = blockLength;
        status = CMSampleBufferCreate(kCFAllocatorDefault,
                                      blockBuffer, true, NULL, NULL,
                                      _formatDesc, 1, 0, NULL, 1,
                                      &sampleSize, &sampleBuffer);

        NSLog(@"\t\t SampleBufferCreate: \t %@", (status == noErr) ? @"successful!" : @"failed...");
    }

    if(status == noErr)
    {
        // set some values of the sample buffer's attachments
        CFArrayRef attachments = CMSampleBufferGetSampleAttachmentsArray(sampleBuffer, YES);
        CFMutableDictionaryRef dict = (CFMutableDictionaryRef)CFArrayGetValueAtIndex(attachments, 0);
        CFDictionarySetValue(dict, kCMSampleAttachmentKey_DisplayImmediately, kCFBooleanTrue);

        // either send the samplebuffer to a VTDecompressionSession or to an AVSampleBufferDisplayLayer
        [self render:sampleBuffer];
    }

    // free memory to avoid a memory leak, do the same for sps, pps and blockbuffer
    if (NULL != data)
    {
        free (data);
        data = NULL;
    }
}

The following method creates your VTD session. Recreate it whenever you receive new parameters. (You don't have to recreate it every single time you receive parameters, pretty sure.)

If you want to set attributes for the destination CVPixelBuffer, read up on the CoreVideo PixelBufferAttributes values and put them in NSDictionary *destinationImageBufferAttributes.
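For example, a slightly fuller attributes dictionary might look like this (which pixel format you request, if any, depends on how you plan to consume the decoded frames):

// example only; the method below builds its own (smaller) dictionary
NSDictionary *destinationImageBufferAttributes = @{
    (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_420YpCbCr8BiPlanarFullRange),
    (id)kCVPixelBufferOpenGLESCompatibilityKey : @YES,
};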

-(void) createDecompSession
{
    // make sure to destroy the old VTD session before replacing it
    if (_decompressionSession != NULL)
    {
        VTDecompressionSessionInvalidate(_decompressionSession);
        CFRelease(_decompressionSession);
    }
    _decompressionSession = NULL;
    VTDecompressionOutputCallbackRecord callBackRecord;
    callBackRecord.decompressionOutputCallback = decompressionSessionDecodeFrameCallback;

    // this is necessary if you need to make calls to Objective C "self" from within in the callback method.
    callBackRecord.decompressionOutputRefCon = (__bridge void *)self;

    // you can set some desired attributes for the destination pixel buffer.  I didn't use this but you may
    // if you need to set some attributes, be sure to uncomment the dictionary in VTDecompressionSessionCreate
    NSDictionary *destinationImageBufferAttributes = [NSDictionary dictionaryWithObjectsAndKeys:
                                                      [NSNumber numberWithBool:YES],
                                                      (id)kCVPixelBufferOpenGLESCompatibilityKey,
                                                      nil];

    OSStatus status =  VTDecompressionSessionCreate(NULL, _formatDesc, NULL,
                                                    NULL, // (__bridge CFDictionaryRef)(destinationImageBufferAttributes)
                                                    &callBackRecord, &_decompressionSession);
    NSLog(@"Video Decompression Session Create: \t %@", (status == noErr) ? @"successful!" : @"failed...");
    if(status != noErr) NSLog(@"\t\t VTD ERROR type: %d", (int)status);
}

Now this method gets called every time the VTD is done decompressing any frame you sent to it. It gets called even if there's an error or the frame was dropped.

void decompressionSessionDecodeFrameCallback(void *decompressionOutputRefCon,
                                             void *sourceFrameRefCon,
                                             OSStatus status,
                                             VTDecodeInfoFlags infoFlags,
                                             CVImageBufferRef imageBuffer,
                                             CMTime presentationTimeStamp,
                                             CMTime presentationDuration)
{
    THISCLASSNAME *streamManager = (__bridge THISCLASSNAME *)decompressionOutputRefCon;

    if (status != noErr)
    {
        NSError *error = [NSError errorWithDomain:NSOSStatusErrorDomain code:status userInfo:nil];
        NSLog(@"Decompressed error: %@", error);
    }
    else
    {
        NSLog(@"Decompressed sucessfully");

        // do something with your resulting CVImageBufferRef that is your decompressed frame
        [streamManager displayDecodedFrame:imageBuffer];
    }
}

Here is where we actually send the sampleBuffer off to the VTD to be decoded.

- (void) render:(CMSampleBufferRef)sampleBuffer
{
    VTDecodeFrameFlags flags = kVTDecodeFrame_EnableAsynchronousDecompression;
    VTDecodeInfoFlags flagOut;
    NSDate* currentTime = [NSDate date];
    VTDecompressionSessionDecodeFrame(_decompressionSession, sampleBuffer, flags,
                                      (void*)CFBridgingRetain(currentTime), &flagOut);

    CFRelease(sampleBuffer);

    // if you're using AVSampleBufferDisplayLayer, you only need to use this line of code
    // [videoLayer enqueueSampleBuffer:sampleBuffer];
}

If you're using AVSampleBufferDisplayLayer, be sure to initialize the layer like this, in viewDidLoad or inside some other init method.

-(void) viewDidLoad
{
    // create our AVSampleBufferDisplayLayer and add it to the view
    videoLayer = [[AVSampleBufferDisplayLayer alloc] init];
    videoLayer.frame = self.view.frame;
    videoLayer.bounds = self.view.bounds;
    videoLayer.videoGravity = AVLayerVideoGravityResizeAspect;

    // set Timebase, you may need this if you need to display frames at specific times
    // I didn't need it so I haven't verified that the timebase is working
    CMTimebaseRef controlTimebase;
    CMTimebaseCreateWithMasterClock(CFAllocatorGetDefault(), CMClockGetHostTimeClock(), &controlTimebase);

    //videoLayer.controlTimebase = controlTimebase;
    CMTimebaseSetTime(self.videoLayer.controlTimebase, kCMTimeZero);
    CMTimebaseSetRate(self.videoLayer.controlTimebase, 1.0);

    [[self.view layer] addSublayer:videoLayer];
}

2
This is great! I actually got this working right before finding this awesome example. Was getting the error VTDecompressionSessionDecodeFrame: -12911. Make sure the correct blockLength is sent to CMBlockBufferCreateWithMemoryBlock.
3rdLion

4
One of the best SO answers I've seen. Thank you so much. I wish I had had this resource when I was trying to get hardware decoding working in my app; it would have made it so much easier.
Braden

2
@DevranCosmoUenal I can't comment on tvOS at the moment. I do know that developers had been asking for access to hardware accelerated decoding for years (since around iOS 4) before Apple gave them VideoToolbox on iOS. So who knows when we'll get it for tvOS. Maybe AVAsset and AVCapture can help you, but I haven't looked at tvOS at all.
Olivia Stork

2
@GaojinHsu iOS prevents background apps from accessing the graphics processor so that the frontmost app is always able to present a great experience to the user. developer.apple.com/library/ios/documentation/3DDrawing/...
Dmytro Hutsuliak

2
@LivyStork the isIFrame parameter in the receivedRawVideoFrame:withSize:isIFrame method is redundant
mrvincenzo

20

In case you can't find the VTD error codes in the framework, I decided to include them here. (Again, all these errors and more can be found inside VideoToolbox.framework itself in the project navigator, in the file VTErrors.h.)

You will get one of these error codes in the VTD decode frame callback, or when you create your VTD session, if you did something incorrectly.

kVTPropertyNotSupportedErr              = -12900,
kVTPropertyReadOnlyErr                  = -12901,
kVTParameterErr                         = -12902,
kVTInvalidSessionErr                    = -12903,
kVTAllocationFailedErr                  = -12904,
kVTPixelTransferNotSupportedErr         = -12905, // c.f. -8961
kVTCouldNotFindVideoDecoderErr          = -12906,
kVTCouldNotCreateInstanceErr            = -12907,
kVTCouldNotFindVideoEncoderErr          = -12908,
kVTVideoDecoderBadDataErr               = -12909, // c.f. -8969
kVTVideoDecoderUnsupportedDataFormatErr = -12910, // c.f. -8970
kVTVideoDecoderMalfunctionErr           = -12911, // c.f. -8960
kVTVideoEncoderMalfunctionErr           = -12912,
kVTVideoDecoderNotAvailableNowErr       = -12913,
kVTImageRotationNotSupportedErr         = -12914,
kVTVideoEncoderNotAvailableNowErr       = -12915,
kVTFormatDescriptionChangeNotSupportedErr   = -12916,
kVTInsufficientSourceColorDataErr       = -12917,
kVTCouldNotCreateColorCorrectionDataErr = -12918,
kVTColorSyncTransformConvertFailedErr   = -12919,
kVTVideoDecoderAuthorizationErr         = -12210,
kVTVideoEncoderAuthorizationErr         = -12211,
kVTColorCorrectionPixelTransferFailedErr    = -12212,
kVTMultiPassStorageIdentifierMismatchErr    = -12213,
kVTMultiPassStorageInvalidErr           = -12214,
kVTFrameSiloInvalidTimeStampErr         = -12215,
kVTFrameSiloInvalidTimeRangeErr         = -12216,
kVTCouldNotFindTemporalFilterErr        = -12217,
kVTPixelTransferNotPermittedErr         = -12218,

11

A nice Swift example of much of this can be found in Josh Baker's Avios library: https://github.com/tidwall/Avios

Note that Avios currently expects the user to handle chunking data at NALU start codes, but it does handle decoding the data from that point forward.

Also worth a look is the Swift-based RTMP library HaishinKit (formerly "LF"), which has its own decoding implementation, including more robust NALU parsing: https://github.com/shogo4405/lf.swift


Is it possible to encode and decode H264 realtime streaming video with p2p multipeer connectivity? @leppert
Sreejith

Hi @leppert, I'm trying to decode stream data using Avios. What do you mean by "handle chunking data at NAL start codes"?
Ramsundar Shandilya, 2017

@RamsundarShandilya yumichan.net/video
leppert

5

In addition to the VTErrors above, I thought it worth adding the CMFormatDescription, CMBlockBuffer and CMSampleBuffer errors that you may encounter while trying Livy's example.

kCMFormatDescriptionError_InvalidParameter  = -12710,
kCMFormatDescriptionError_AllocationFailed  = -12711,
kCMFormatDescriptionError_ValueNotAvailable = -12718,

kCMBlockBufferNoErr                             = 0,
kCMBlockBufferStructureAllocationFailedErr      = -12700,
kCMBlockBufferBlockAllocationFailedErr          = -12701,
kCMBlockBufferBadCustomBlockSourceErr           = -12702,
kCMBlockBufferBadOffsetParameterErr             = -12703,
kCMBlockBufferBadLengthParameterErr             = -12704,
kCMBlockBufferBadPointerParameterErr            = -12705,
kCMBlockBufferEmptyBBufErr                      = -12706,
kCMBlockBufferUnallocatedBlockErr               = -12707,
kCMBlockBufferInsufficientSpaceErr              = -12708,

kCMSampleBufferError_AllocationFailed             = -12730,
kCMSampleBufferError_RequiredParameterMissing     = -12731,
kCMSampleBufferError_AlreadyHasDataBuffer         = -12732,
kCMSampleBufferError_BufferNotReady               = -12733,
kCMSampleBufferError_SampleIndexOutOfRange        = -12734,
kCMSampleBufferError_BufferHasNoSampleSizes       = -12735,
kCMSampleBufferError_BufferHasNoSampleTimingInfo  = -12736,
kCMSampleBufferError_ArrayTooSmall                = -12737,
kCMSampleBufferError_InvalidEntryCount            = -12738,
kCMSampleBufferError_CannotSubdivide              = -12739,
kCMSampleBufferError_SampleTimingInfoInvalid      = -12740,
kCMSampleBufferError_InvalidMediaTypeForOperation = -12741,
kCMSampleBufferError_InvalidSampleData            = -12742,
kCMSampleBufferError_InvalidMediaFormat           = -12743,
kCMSampleBufferError_Invalidated                  = -12744,
kCMSampleBufferError_DataFailed                   = -16750,
kCMSampleBufferError_DataCanceled                 = -16751,

2

@Livy to remove memory leaks, add the following before CMVideoFormatDescriptionCreateFromH264ParameterSets:

if (_formatDesc) {
    CFRelease(_formatDesc);
    _formatDesc = NULL;
}