Kamera

  • Hi,
    I'm currently working on an app similar to Hipstamatic. I've already created all the graphics I need, but I don't know how to integrate the camera. I need a preview layer and a take-picture function. Can anyone help me with this?
  • Capturing Still Images
    You use an AVCaptureStillImageOutput output if you want to capture still images with accompanying metadata. The resolution of the image depends on the preset for the session, as illustrated in this table:

    Preset     iPhone 3G   iPhone 3GS   iPhone 4 (Back)   iPhone 4 (Front)
    High       400x304     640x480      1280x720          640x480
    Medium     400x304     480x360      480x360           480x360
    Low        400x304     192x144      192x144           192x144
    640x480    N/A         640x480      640x480           640x480
    1280x720   N/A         N/A          1280x720          N/A
    Photo      1600x1200   2048x1536    2592x1936         640x480

    Pixel and Encoding Formats
    Different devices support different image formats:

    Device       Supported formats
    iPhone 3G    yuvs, 2vuy, BGRA, jpeg
    iPhone 3GS   420f, 420v, BGRA, jpeg
    iPhone 4     420f, 420v, BGRA, jpeg

    You can find out what pixel and codec types are supported using availableImageDataCVPixelFormatTypes and availableImageDataCodecTypes respectively. You set the outputSettings dictionary to specify the image format you want, for example:

    AVCaptureStillImageOutput *stillImageOutput = [[AVCaptureStillImageOutput alloc] init];
    NSDictionary *outputSettings = [[NSDictionary alloc] initWithObjectsAndKeys:
                                        AVVideoCodecJPEG, AVVideoCodecKey, nil];
    [stillImageOutput setOutputSettings:outputSettings];
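
    If you instead want uncompressed pixel buffers (for example BGRA for custom processing), you can specify a pixel format via kCVPixelBufferPixelFormatTypeKey. A sketch, assuming BGRA appears in availableImageDataCVPixelFormatTypes on your device (per the table above it does on all three):

    ```objc
    #import <AVFoundation/AVFoundation.h>
    #import <CoreVideo/CoreVideo.h>

    // Request uncompressed 32-bit BGRA sample buffers instead of JPEG.
    AVCaptureStillImageOutput *stillImageOutput = [[AVCaptureStillImageOutput alloc] init];
    NSDictionary *outputSettings = [NSDictionary dictionaryWithObject:
            [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA]
        forKey:(id)kCVPixelBufferPixelFormatTypeKey];
    [stillImageOutput setOutputSettings:outputSettings];
    ```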

    If you want to capture a JPEG image, you should typically not specify your own compression format. Instead, let the still image output do the compression for you, since its compression is hardware-accelerated. If you need a data representation of the image, you can use jpegStillImageNSDataRepresentation:.

    Capturing an Image
    When you want to capture an image, you send the output a captureStillImageAsynchronouslyFromConnection:completionHandler: message. The first argument is the connection you want to use for the capture. You need to look for the connection whose input port is collecting video:

    AVCaptureConnection *videoConnection = nil;
    for (AVCaptureConnection *connection in stillImageOutput.connections) {
        for (AVCaptureInputPort *port in [connection inputPorts]) {
            if ([[port mediaType] isEqual:AVMediaTypeVideo]) {
                videoConnection = connection;
                break;
            }
        }
        if (videoConnection) { break; }
    }
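
    With the video connection found, the capture call itself might look like the following sketch. It assumes the output was configured for JPEG as above; jpegStillImageNSDataRepresentation: is AVCaptureStillImageOutput's class method for turning a JPEG sample buffer into NSData:

    ```objc
    #import <AVFoundation/AVFoundation.h>
    #import <UIKit/UIKit.h>

    [stillImageOutput captureStillImageAsynchronouslyFromConnection:videoConnection
        completionHandler:^(CMSampleBufferRef imageSampleBuffer, NSError *error) {
            if (imageSampleBuffer == NULL) {
                NSLog(@"Capture failed: %@", error);
                return;
            }
            // Convert the JPEG sample buffer into NSData, then into a UIImage.
            NSData *jpegData =
                [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageSampleBuffer];
            UIImage *image = [[UIImage alloc] initWithData:jpegData];
            // ... use the image, e.g. save it or show it in an image view ...
        }];
    ```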

    The second argument to captureStillImageAsynchronouslyFromConnection:completionHandler: is a block that takes two arguments: a CMSampleBuffer containing the image data, and an error. The sample buffer itself may contain metadata, such as an EXIF dictionary, as an attachment. You can modify the attachments if you want, but note the optimization for JPEG images discussed in "Pixel and Encoding Formats" above.

    Showing the User What's Being Recorded
    You can provide the user with a preview of what's being recorded by the camera using a preview layer, or by the microphone by monitoring the audio channel.

    Video Preview
    You can provide the user with a preview of what's being recorded using an AVCaptureVideoPreviewLayer object. AVCaptureVideoPreviewLayer is a subclass of CALayer (see Core Animation Programming Guide). You don't need any outputs to show the preview.
    Unlike a capture output, a video preview layer retains the session with which it is associated. This is to ensure that the session is not deallocated while the layer is attempting to display video. This is reflected in the way you initialize a preview layer:

    AVCaptureSession *captureSession = <#Get a capture session#>;
    CALayer *viewLayer = <#Get a layer from the view in which you want to present the preview#>;

    AVCaptureVideoPreviewLayer *captureVideoPreviewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:captureSession];
    [viewLayer addSublayer:captureVideoPreviewLayer];

    In general, the preview layer behaves like any other CALayer object in the render tree (see Core Animation Programming Guide). You can scale the image and perform transformations, rotations, and so on just as you would with any layer. One difference is that you may need to set the layer's orientation property to specify how it should rotate images coming from the camera. In addition, on iPhone 4 the preview layer supports mirroring (this is the default when previewing the front-facing camera).
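
    For example, a minimal preview setup might look like the following sketch (previewView is a hypothetical UIView in your view controller; the session is assumed to be configured and started elsewhere):

    ```objc
    #import <AVFoundation/AVFoundation.h>
    #import <UIKit/UIKit.h>

    AVCaptureVideoPreviewLayer *previewLayer =
        [[AVCaptureVideoPreviewLayer alloc] initWithSession:captureSession];
    // Size the preview layer to fill the hosting view, preserving the
    // camera's aspect ratio by cropping (see the gravity modes below).
    previewLayer.frame = previewView.bounds;
    previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
    [previewView.layer addSublayer:previewLayer];
    ```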
    Video Gravity Modes
    The preview layer supports three gravity modes that you set using videoGravity: AVLayerVideoGravityResizeAspect, AVLayerVideoGravityResizeAspectFill, and AVLayerVideoGravityResize.

    Using "Tap to Focus" With a Preview
    You need to take care when implementing tap-to-focus in conjunction with a preview layer. You must account for the preview orientation and gravity of the layer, and the possibility that the preview may be mirrored.
    Thanks a lot.
    I think this is what I need, but since I'm new to the scene, it's all Greek to me ;) What exactly do I need to put into my ViewController.h and .m?
  • Uh, what is this? That is copyrighted material, after all. The operator of this forum could get into real trouble over it.
    In any case, you should first get familiar with the basics, or hire a developer to do the implementation. That's what the job board here is for.