UPDATE 1
I found a promising Apple doc here describing CGDataProviderCopyData. I think it could accomplish my original goal: pull an image out of the context and extract its pixel values.
The sample code uses CGImageGetDataProvider and some other calls I don't understand, so I can't work out how to apply them. How do I get the data out of the variable con (or its context) and access the pixels?
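For reference, the documented pattern for CGDataProviderCopyData looks roughly like the sketch below. This assumes the context's contents are first captured with UIGraphicsGetImageFromCurrentImageContext (called before UIGraphicsEndImageContext); the variable names snapshot, pixelData, etc. are mine, not from the original code.

```objectivec
// Sketch: capture the current image context, then copy its raw pixel bytes.
// Must run between UIGraphicsBeginImageContext... and UIGraphicsEndImageContext.
UIImage *snapshot = UIGraphicsGetImageFromCurrentImageContext();
CGImageRef cgImage = [snapshot CGImage];

CGDataProviderRef provider = CGImageGetDataProvider(cgImage);
CFDataRef pixelData = CGDataProviderCopyData(provider);   // copies the bytes
const UInt8 *bytes = CFDataGetBytePtr(pixelData);

size_t bytesPerRow = CGImageGetBytesPerRow(cgImage);
size_t bpp = CGImageGetBitsPerPixel(cgImage) / 8;         // bytes per pixel

// Pixel at (x, y); the component order depends on the image's bitmap info.
size_t x = 0, y = 0;
const UInt8 *pixel = bytes + y * bytesPerRow + x * bpp;
NSLog(@"first component of pixel (0,0): %u", pixel[0]);

CFRelease(pixelData);                                     // we own the copy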
UPDATE 0
Maybe I'm asking the wrong question. In my case CGContextDrawImage scales the image from 104x104 down to 13x13, and then CGContextDrawImage displays it. Maybe I need to find just the part of CGContextDrawImage that does the scaling.
I found initWithData:scale: in the UIImage class reference, but I don't know how to supply the data for that method. The scale I want is 0.25.
- (id)initWithData:(NSData *)data scale:(CGFloat)scale
Can someone tell me how to supply the (NSData *)data for my app?
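For what it's worth, one common way to obtain the NSData is to encode an existing UIImage, e.g. with UIImagePNGRepresentation. A sketch (the variable names are mine); note that the scale: argument sets the point-to-pixel ratio of the resulting UIImage for display purposes; it does not resample the underlying bitmap:

```objectivec
// Sketch: encode an existing image to NSData, then rebuild it with a scale factor.
// scale: affects how the image is *displayed* (points vs. pixels); it does not
// shrink the pixel data itself.
UIImage *original = [UIImage imageNamed:@"ipad 7D.JPG"];
NSData *data = UIImagePNGRepresentation(original); // or UIImageJPEGRepresentation(original, 0.9)
UIImage *scaled = [[UIImage alloc] initWithData:data scale:4.0]; // 4.0 = 1 / 0.25
```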
//
// BSViewController.m
#import "BSViewController.h"
@interface BSViewController ()
@end
@implementation BSViewController
- (IBAction) chooseImage:(id) sender{
[self.view addSubview:self.imageView];
UIImage* testCard = [UIImage imageNamed:@"ipad 7D.JPG"];
CGSize sz = [testCard size];
CGImageRef num = CGImageCreateWithImageInRect([testCard CGImage],CGRectMake(532, 0, 104, 104));
UIGraphicsBeginImageContext(CGSizeMake( 250,650));
CGContextRef con = UIGraphicsGetCurrentContext();
CGContextDrawImage(con, CGRectMake(0, 0, 13, 13) ,num);
UIGraphicsEndImageContext();
self.workingImage = CFBridgingRelease(num); // transfers ownership to ARC; a separate CGImageRelease would over-release num
I am struggling with the transition from the code above to the code below.
More specifically, I need to feed imageRef the right input. I want to give imageRef a 13x13 image, but when I give it num it gets the 104x104 image, and when I give it con it gets a 0x0 image. (Another tentative approach is mentioned at the bottom.)
The code below is from Brandon Trebitowski:
CGImageRef imageRef = num;
NSUInteger width = CGImageGetWidth(imageRef);
NSUInteger height = CGImageGetHeight(imageRef);
NSLog(@"the width: %lu", (unsigned long)width);
NSLog(@"the height: %lu", (unsigned long)height);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
unsigned char *rawData = (unsigned char*) calloc(height * width * 4, sizeof(unsigned char));
NSUInteger bytesPerPixel = 4;
NSUInteger bytesPerRow = bytesPerPixel * width;
NSUInteger bitsPerComponent = 8;
CGContextRef context = CGBitmapContextCreate(rawData, width, height,
bitsPerComponent, bytesPerRow, colorSpace,
kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGColorSpaceRelease(colorSpace);
CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
CGContextRelease(context);
NSLog(@"Stop 3");
// Now your rawData contains the image data in the RGBA8888 pixel format.
int byteIndex = (bytesPerRow * 0) + 0 * bytesPerPixel;
for (int ii = 0 ; ii < width * height ; ++ii)
{
int outputColor = (rawData[byteIndex] + rawData[byteIndex+1] + rawData[byteIndex+2]) / 3;
rawData[byteIndex] = (char) (outputColor);
rawData[byteIndex+1] = (char) (outputColor);
rawData[byteIndex+2] = (char) (outputColor);
byteIndex += 4;
}
}
- (void)viewDidLoad
{
[super viewDidLoad];
// Do any additional setup after loading the view, typically from a nib.
}
- (void)didReceiveMemoryWarning
{
[super didReceiveMemoryWarning];
// Dispose of any resources that can be recreated.
}
@end
I also tried defining self.workingImage (in one of the two ways below) and feeding it to imageRef:
self.workingImage = num;
self.workingImage = (__bridge UIImage *)(num);
CGImageRef imageRef = [self.workingImage CGImage];
Posted 2013-01-25 20:02:03
I changed two lines and added three, and got the result I wanted. The main change was using UIGraphicsBeginImageContextWithOptions instead of UIGraphicsBeginImageContext, so the rescaling is done before the image is drawn.
@implementation BSViewController
- (IBAction) chooseImage:(id) sender{
[self.view addSubview:self.imageView];
UIImage* testCard = [UIImage imageNamed:@"ipad 7D.JPG"];
CGSize sz = [testCard size];
CGImageRef num = CGImageCreateWithImageInRect([testCard CGImage],CGRectMake(532, 0, 104, 104));
UIGraphicsBeginImageContextWithOptions(CGSizeMake( 104,104), NO, 0.125); // Changed
CGContextRef con = UIGraphicsGetCurrentContext();
CGContextDrawImage(con, CGRectMake(0, 0, 104, 104) ,num); // Changed
UIImage* im = UIGraphicsGetImageFromCurrentImageContext(); // Added
UIGraphicsEndImageContext();
self.workingImage = CFBridgingRelease(num); // transfers ownership to ARC; a separate CGImageRelease would over-release num
UIImageView* iv = [[UIImageView alloc] initWithImage:im]; // Added
[self.imageView addSubview: iv]; // Added
CGImageRef imageRef = [im CGImage];
NSUInteger width = CGImageGetWidth(imageRef);
NSUInteger height = CGImageGetHeight(imageRef);
etc.
https://stackoverflow.com/questions/14428053