Counting Rice Grains


81

Consider these 10 images of various numbers of grains of white rice.
These are only thumbnails; click an image to view it at full size.

A: B: C: D: E:

F: G: H: I: J:

Grain counts: A: 3, B: 5, C: 12, D: 25, E: 50, F: 83, G: 120, H: 150, I: 151, J: 200

Note that...

  • Grains may touch each other, but they never overlap; the grains are arranged no more than one grain deep.
  • The images have different dimensions, but the scale of the rice is consistent across all of them because the camera and background were stationary.
  • Grains never go out of bounds or touch the image boundary.
  • The background is always the same consistent shade of yellowish-white.
  • Grains large and small are each counted as one grain.

These 5 points are guarantees for all images of this kind.

The Challenge

Write a program that takes such images and counts the number of grains of rice as accurately as possible.

Your program should take the filename of the image and print the number of grains it computes. Your program must work for at least one of these image file formats: JPEG, Bitmap, PNG, GIF, TIFF (the sample images are currently all JPEGs).

You may use image processing and computer vision libraries.

You may not hard-code the outputs for the 10 sample images. Your algorithm should be applicable to all similar rice-grain images. It must be able to run in under 5 minutes on a decent modern computer as long as the image area is less than 2000*2000 pixels and there are fewer than 300 grains of rice.

Scoring

For each of the 10 images, take the absolute value of the actual number of grains minus the number of grains your program predicts. Sum these absolute values to get your score. The lowest score wins; a score of 0 is perfect.
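
To make the scoring rule concrete, here is a minimal Python sketch of it; the predicted values are only an illustrative example, not an actual submission:

actual    = [3, 5, 12, 25, 50, 83, 120, 150, 151, 200]
predicted = [3, 5, 12, 25, 49, 83, 118, 149, 152, 202]  # example predictions only
score = sum(abs(a - p) for a, p in zip(actual, predicted))
print(score)  # 7 for this example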

In case of a tie, the highest-voted answer wins. I may test your program on additional images to verify its validity and accuracy.


1
Surely someone has to try scikit-learn!

Great contest! :) Btw - could you tell us something about the end date of this challenge?
cyriel, 2014

1
@Lembik Down to 7 :)
Dr. belisarius, 2014

5
One day, a rice scientist is going to come along and be overjoyed that this question exists.
Nit, 2014

Answers:


22

Mathematica, score: 7

i = {"http://i.stack.imgur.com/8T6W2.jpg",  "http://i.stack.imgur.com/pgWt1.jpg", 
     "http://i.stack.imgur.com/M0K5w.jpg",  "http://i.stack.imgur.com/eUFNo.jpg", 
     "http://i.stack.imgur.com/2TFdi.jpg",  "http://i.stack.imgur.com/wX48v.jpg", 
     "http://i.stack.imgur.com/eXCGt.jpg",  "http://i.stack.imgur.com/9na4J.jpg",
     "http://i.stack.imgur.com/UMP9V.jpg",  "http://i.stack.imgur.com/nP3Hr.jpg"};

im = Import /@ i;

I think the function names are descriptive enough:

getSatHSVChannelAndBinarize[i_Image]             := Binarize@ColorSeparate[i, "HSB"][[2]]
removeSmallNoise[i_Image]                        := DeleteSmallComponents[i, 100]
fillSmallHoles[i_Image]                          := Closing[i, 1]
getMorphologicalComponentsAreas[i_Image]         := ComponentMeasurements[i, "Area"][[All, 2]]
roundAreaSizeToGrainCount[areaSize_, grainSize_] := Round[areaSize/grainSize]

Processing all the images at once:

counts = Plus @@@
  (roundAreaSizeToGrainCount[#, 2900] & /@
      (getMorphologicalComponentsAreas@
        fillSmallHoles@
         removeSmallNoise@
          getSatHSVChannelAndBinarize@#) & /@ im)

(* Output {3, 5, 12, 25, 49, 83, 118, 149, 152, 202} *)

The score is:

counts - {3, 5, 12, 25, 50, 83, 120, 150, 151, 200} // Abs // Total
(* 7 *)

Here you can see how sensitive the score is to the grain size used:

(plot: score as a function of the assumed grain size)


2
Much clearer, thanks!
Calvin's Hobbies, 2014

Could this exact procedure be replicated in Python, or is Mathematica doing something special here that Python libraries can't?

@Lembik Can't say; no Python here, sorry. (Though I doubt exactly the same algorithms behind EdgeDetect[], DeleteSmallComponents[] and Dilation[] are implemented elsewhere.)
Dr. belisarius, 2014
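
For readers who asked about a Python version: below is a rough, untested sketch of the same idea using OpenCV, not a port of the Mathematica code. Otsu thresholding stands in for Binarize[], connected-component statistics replace DeleteSmallComponents[] and ComponentMeasurements[], and the grain area of 2900 is the value used in the answer above; everything else is an assumption.

import sys
import cv2
import numpy as np

img = cv2.imread(sys.argv[1])
sat = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)[:, :, 1]        # saturation channel
_, bw = cv2.threshold(sat, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)

# drop small noise components (roughly DeleteSmallComponents[i, 100])
n, labels, stats, _ = cv2.connectedComponentsWithStats(bw)
for i in range(1, n):
    if stats[i, cv2.CC_STAT_AREA] < 100:
        bw[labels == i] = 0

# fill small holes (roughly Closing[i, 1])
bw = cv2.morphologyEx(bw, cv2.MORPH_CLOSE, np.ones((3, 3), np.uint8))

# round each component's area to a multiple of the grain area and sum
n, labels, stats, _ = cv2.connectedComponentsWithStats(bw)
print(sum(int(round(stats[i, cv2.CC_STAT_AREA] / 2900.0)) for i in range(1, n)))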

55

Python, score: 16 (previously 24)

Like Falko's solution, this one is based on measuring the "foreground" area and dividing it by the average grain area.

Actually, what this program tries to detect is not the foreground but the background. Using the fact that rice grains never touch the image boundary, it starts by flood-filling white from the top-left pixel. The flood-fill algorithm paints a neighbouring pixel if the difference in brightness between it and the current pixel is within a certain threshold, thereby adjusting to gradual changes in the background colour. At the end of this stage, the image might look something like this:

(figure 1)

As you can see, it does a very good job of detecting the background, but it misses any areas that are "trapped" between grains. We handle these areas by estimating the background brightness at each pixel and painting over all pixels that are equally bright or brighter. The estimate works as follows: during the flood-fill stage we compute the average background brightness of every row and every column. The estimated background brightness at a pixel is then the average of the row and column brightness at that pixel. This produces something like the following:

(figure 2)

EDIT: Finally, the area of each contiguous foreground (i.e. non-white) region is divided by the average, pre-computed grain area, giving an estimate of the number of grains in that region. The sum of these estimates is the result. Initially we did the same thing over the entire foreground area at once, but this approach is, quite literally, more fine-grained.


from sys import argv; from PIL import Image

# Init
I = Image.open(argv[1]); W, H = I.size; A = W * H
D = [sum(c) for c in I.getdata()]
Bh = [0] * H; Ch = [0] * H
Bv = [0] * W; Cv = [0] * W

# Flood-fill
Background = 3 * 255 + 1; S = [0]
while S:
    i = S.pop(); c = D[i]
    if c != Background:
        D[i] = Background
        Bh[i / W] += c; Ch[i / W] += 1
        Bv[i % W] += c; Cv[i % W] += 1
        S += [(i + o) % A for o in [1, -1, W, -W] if abs(D[(i + o) % A] - c) < 10]

# Eliminate "trapped" areas
for i in xrange(H): Bh[i] /= float(max(Ch[i], 1))
for i in xrange(W): Bv[i] /= float(max(Cv[i], 1))
for i in xrange(A):
    a = (Bh[i / W] + Bv[i % W]) / 2
    if D[i] >= a: D[i] = Background

# Estimate grain count
Foreground = -1; avg_grain_area = 3038.38; grain_count = 0
for i in xrange(A):
    if Foreground < D[i] < Background:
        S = [i]; area = 0
        while S:
            j = S.pop() % A
            if Foreground < D[j] < Background:
                D[j] = Foreground; area += 1
                S += [j - 1, j + 1, j - W, j + W]
        grain_count += int(round(area / avg_grain_area))

# Output
print grain_count

The input filename is given via the command line.

Results

      Actual  Estimate  Abs. Error
A         3         3           0
B         5         5           0
C        12        12           0
D        25        25           0
E        50        48           2
F        83        83           0
G       120       116           4
H       150       145           5
I       151       156           5
J       200       200           0
                        ----------
                Total:         16

A B C D E

F G H I J


2
This is a really clever solution, nice work!
Chris Cirefice

1
Where does avg_grain_area = 3038.38; come from?
njzk2

2
Doesn't that count as hardcoding the result?
njzk2, 2014

5
@njzk2 No. Given the rule The images have different dimensions but the scale of the rice in all of them is consistent because the camera and background were stationary. this is simply a value that represents that rule. The result, however, changes according to the input. If you change the rule, this value will change, but the result will still be the same - based on the input.
Adam Davis

6
I'm fine with the average-area thing. Grain area is (roughly) constant across the images.
Calvin's Hobbies, 2014

28

Python + OpenCV: score 27

Horizontal line scanning

The idea: scan the image one line at a time. For each line, count the number of rice grains encountered (by checking whether the pixel goes from black to white or the opposite). If the grain count for the line increases (compared to the previous line), it means we encountered a new grain. If that number decreases, it means we moved past a grain; in that case, add +1 to the total result.

(image)

Number in red = rice grains encountered for that line
Number in gray = total amount of grains encountered (what we are looking for)

Because of how the algorithm works, it is important to have a clean black-and-white image; lots of noise produces bad results. The main background is cleared first using flood fill (a solution similar to Ell's answer), then a threshold is applied to produce a black-and-white result.

(image)

It is far from perfect, but it produces good results given how simple it is. There are probably many ways to improve it (producing a better black-and-white image, scanning in other directions (e.g. vertical, diagonal) and averaging, etc.).

import cv2
import numpy
import sys

filename = sys.argv[1]
I = cv2.imread(filename, 0)
h,w = I.shape[:2]
diff = (3,3,3)
mask = numpy.zeros((h+2,w+2),numpy.uint8)
cv2.floodFill(I,mask,(0,0), (255,255,255),diff,diff)
T,I = cv2.threshold(I,180,255,cv2.THRESH_BINARY)
I = cv2.medianBlur(I, 7)

totalrice = 0
oldlinecount = 0
for y in range(0, h):
    oldc = 0
    linecount = 0
    start = 0   
    for x in range(0, w):
        c = I[y,x] < 128;
        if c == 1 and oldc == 0:
            start = x
        if c == 0 and oldc == 1 and (x - start) > 10:
            linecount += 1
        oldc = c
    if oldlinecount != linecount:
        if linecount < oldlinecount:
            totalrice += oldlinecount - linecount
        oldlinecount = linecount
print totalrice

Errors per image: 0, 0, 0, 3, 0, 12, 4, 0, 7, 1


24

Python + OpenCV: score 84

Here is a first naive attempt. It applies an adaptive threshold with manually tuned parameters, closes some holes with a subsequent erosion and dilation, and derives the grain count from the foreground area.

import cv2
import numpy as np

filename = raw_input()

I = cv2.imread(filename, 0)
I = cv2.medianBlur(I, 3)
bw = cv2.adaptiveThreshold(I, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY, 101, 1)

kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (17, 17))
bw = cv2.dilate(cv2.erode(bw, kernel), kernel)

print np.round_(np.sum(bw == 0) / 3015.0)

Here you can see the intermediate binary image (black is the foreground):

(image)

The errors per image are 0, 0, 2, 2, 4, 0, 27, 42, 0 and 7 grains.


20

C# + OpenCvSharp, score: 2

This is my second attempt. It is completely different from my first, much simpler, attempt, so I'm posting it as a separate solution.

The basic idea is to identify and label each individual grain by an iterative ellipse fit. The pixels belonging to that grain are then removed from the source, and the program tries to find the next grain, until every pixel has been labelled.

This is not the prettiest solution. It is a giant hog of 600 lines of code. The largest image takes 1.5 minutes, and I really apologize for the messy code.

There are so many parameters and ways to think about this problem that I am rather afraid of overfitting my program to the 10 sample images. The final score of 2 is almost certainly a case of overfitting: I have two parameters, average grain size in pixel and minimum ratio of pixel / elipse_area, and in the end I simply exhausted all combinations of these two parameters until I got the lowest score. I am not sure whether this is entirely in keeping with the rules of this challenge.

average_grain_size_in_pixel = 2530
pixel / elipse_area >= 0.73

However, even without these overfitted crutches the results are quite good. Without a fixed grain size or pixel ratio, estimating the average grain size from the training images alone, the score is still 27.

And I get not just the count as a result, but also the actual position, orientation and shape of each grain. A small number of grains are mislabelled, but overall most labels match the actual grains accurately:

A B C D E

F G H I J

(Click each image to see the full-sized version)

After this labelling step is done, my program looks at each individual grain and estimates, based on the number of pixels and the pixel/ellipse-area ratio, whether it is

  • a single grain (+1)
  • multiple grains mislabelled as one (+X)
  • a blob too small to be a grain (+0)

The error scores per image are A: 0; B: 0; C: 0; D: 0; E: 2; F: 0; G: 0; H: 0; I: 0; J: 0

However, the actual error is probably higher. Some errors within the same image cancel each other out. Image H in particular has some badly mislabelled grains, whereas in image E the labels are mostly correct.

The concept is somewhat contrived:

  • First, separate the foreground via Otsu thresholding on the saturation channel (see my previous answer for details)

  • Repeat until no pixels are left:

    • Select the largest blob
    • Pick 10 random edge pixels on that blob as starting positions for a grain

    • For each starting point

      • Assume a grain of width and height 10 pixels at this position.

      • Repeat until convergence

        • From that point, move radially outwards at different angles until you hit an edge pixel (white to black)

        • The pixels found will hopefully be the edge pixels of a single grain. Try to separate the inliers from the outliers by discarding pixels that are farther from the assumed ellipse than the rest

        • Repeatedly try to fit an ellipse through a subset of those points, keeping the best ellipse (RANSAC; a rough sketch of this step follows after this list)

        • Update the grain's position, orientation, width and height with the fitted ellipse

        • If the grain's position does not change much anymore, stop

    • Among the 10 fitted grains, choose the best grain according to shape and number of edge pixels. Discard the others

    • Remove all pixels of that grain from the source image, then repeat

    • Finally, go through the list of found grains and count each one as 1 grain, 0 grains (too small) or 2 grains (too big)
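
As a rough illustration of the RANSAC-style fitting step mentioned in the list (in Python/OpenCV rather than the answer's C#): sample a few of the collected edge pixels, fit an ellipse, count how many edge pixels lie close to its border, and keep the best candidate. The candidate count and the tolerance here are made-up values, and edge_points (at least 10 edge pixels from the radial search, as a float32 (N, 2) array) is assumed to exist.

import cv2
import numpy as np

def ransac_ellipse(edge_points, n_candidates=20, tolerance=5.0, rng=np.random):
    best, best_inliers = None, -1
    for _ in range(n_candidates):
        # fit an ellipse through 10 randomly chosen edge points
        sample = edge_points[rng.choice(len(edge_points), size=10, replace=False)]
        (cx, cy), (w, h), angle = cv2.fitEllipse(sample)

        # count edge points close to the ellipse border, using the same
        # inner/outer-ellipse test as the answer's isOnEllipse()
        t = np.deg2rad(angle)
        d = edge_points - np.array([cx, cy])
        u = d[:, 0] * np.cos(t) + d[:, 1] * np.sin(t)
        v = -d[:, 0] * np.sin(t) + d[:, 1] * np.cos(t)
        outer = (u / (w / 2 + tolerance)) ** 2 + (v / (h / 2 + tolerance)) ** 2
        inner = (u / max(w / 2 - tolerance, 1)) ** 2 + (v / max(h / 2 - tolerance, 1)) ** 2
        inliers = np.count_nonzero((outer <= 1) & (inner >= 1))

        if inliers > best_inliers:
            best, best_inliers = ((cx, cy), (w, h), angle), inliers
    return best  # ((center_x, center_y), (width, height), angle in degrees)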

One of my main problems was that I did not want to implement a full point-to-ellipse distance metric, since computing that is itself a complicated iterative process. So I used various workarounds built on the OpenCV functions Ellipse2Poly and FitEllipse, and the results are not too pretty.

Apparently I also broke the codegolf size limit.

Answers are limited to 30000 characters, and I'm currently at 34000, so I'll have to shorten some of the code below.

The full code can be found at http://pastebin.com/RgM7hMxq

Sorry, I did not know there was a size limit.

class Program
{
    static void Main(string[] args)
    {

                // Due to size constraints, I removed the inital part of my program that does background separation. For the full source, check the link, or see my previous program.


                // list of recognized grains
                List<Grain> grains = new List<Grain>();

                Random rand = new Random(4); // determined by fair dice throw, guaranteed to be random

                // repeat until we have found all grains (to a maximum of 10000)
                for (int numIterations = 0; numIterations < 10000; numIterations++ )
                {
                    // erode the image of the remaining foreground pixels, only big blobs can be grains
                    foreground.Erode(erodedForeground,null,7);

                    // pick a number of starting points to fit grains
                    List<CvPoint> startPoints = new List<CvPoint>();
                    using (CvMemStorage storage = new CvMemStorage())
                    using (CvContourScanner scanner = new CvContourScanner(erodedForeground, storage, CvContour.SizeOf, ContourRetrieval.List, ContourChain.ApproxNone))
                    {
                        if (!scanner.Any()) break; // no grains left, finished!

                        // search for grains within the biggest blob first (this is arbitrary)
                        var biggestBlob = scanner.OrderByDescending(c => c.Count()).First();

                        // pick 10 random edge pixels
                        for (int i = 0; i < 10; i++)
                        {
                            startPoints.Add(biggestBlob.ElementAt(rand.Next(biggestBlob.Count())).Value);
                        }
                    }

                    // for each starting point, try to fit a grain there
                    ConcurrentBag<Grain> candidates = new ConcurrentBag<Grain>();
                    Parallel.ForEach(startPoints, point =>
                    {
                        Grain candidate = new Grain(point);
                        candidate.Fit(foreground);
                        candidates.Add(candidate);
                    });

                    Grain grain = candidates
                        .OrderByDescending(g=>g.Converged) // we don't want grains where the iterative fit did not finish
                        .ThenBy(g=>g.IsTooSmall) // we don't want tiny grains
                        .ThenByDescending(g => g.CircumferenceRatio) // we want grains that have many edge pixels close to the fitted elipse
                        .ThenBy(g => g.MeanSquaredError)
                        .First(); // we only want the best fit among the 10 candidates

                    // count the number of foreground pixels this grain has
                    grain.CountPixel(foreground);

                    // remove the grain from the foreground
                    grain.Draw(foreground,CvColor.Black);

                    // add the grain to the colection fo found grains
                    grains.Add(grain);
                    grain.Index = grains.Count;

                    // draw the grain for visualisation
                    grain.Draw(display, CvColor.Random());
                    grain.DrawContour(display, CvColor.Random());
                    grain.DrawEllipse(display, CvColor.Random());

                    //display.SaveImage("10-foundGrains.png");
                }

                // throw away really bad grains
                grains = grains.Where(g => g.PixelRatio >= 0.73).ToList();

                // estimate the average grain size, ignoring outliers
                double avgGrainSize =
                    grains.OrderBy(g => g.NumPixel).Skip(grains.Count/10).Take(grains.Count*9/10).Average(g => g.NumPixel);

                //ignore the estimated grain size, use a fixed size
                avgGrainSize = 2530;

                // count the number of grains, using the average grain size
                double numGrains = grains.Sum(g => Math.Round(g.NumPixel * 1.0 / avgGrainSize));

                // get some statistics
                double avgWidth = grains.Where(g => Math.Round(g.NumPixel * 1.0 / avgGrainSize) == 1).Average(g => g.Width);
                double avgHeight = grains.Where(g => Math.Round(g.NumPixel * 1.0 / avgGrainSize) == 1).Average(g => g.Height);
                double avgPixelRatio = grains.Where(g => Math.Round(g.NumPixel * 1.0 / avgGrainSize) == 1).Average(g => g.PixelRatio);

                int numUndersized = grains.Count(g => Math.Round(g.NumPixel * 1.0 / avgGrainSize) < 1);
                int numOversized = grains.Count(g => Math.Round(g.NumPixel * 1.0 / avgGrainSize) > 1);

                double avgWidthUndersized = grains.Where(g => Math.Round(g.NumPixel * 1.0 / avgGrainSize) < 1).Select(g=>g.Width).DefaultIfEmpty(0).Average();
                double avgHeightUndersized = grains.Where(g => Math.Round(g.NumPixel * 1.0 / avgGrainSize) < 1).Select(g => g.Height).DefaultIfEmpty(0).Average();
                double avgGrainSizeUndersized = grains.Where(g => Math.Round(g.NumPixel * 1.0 / avgGrainSize) < 1).Select(g => g.NumPixel).DefaultIfEmpty(0).Average();
                double avgPixelRatioUndersized = grains.Where(g => Math.Round(g.NumPixel * 1.0 / avgGrainSize) < 1).Select(g => g.PixelRatio).DefaultIfEmpty(0).Average();

                double avgWidthOversized = grains.Where(g => Math.Round(g.NumPixel * 1.0 / avgGrainSize) > 1).Select(g => g.Width).DefaultIfEmpty(0).Average();
                double avgHeightOversized = grains.Where(g => Math.Round(g.NumPixel * 1.0 / avgGrainSize) > 1).Select(g => g.Height).DefaultIfEmpty(0).Average();
                double avgGrainSizeOversized = grains.Where(g => Math.Round(g.NumPixel * 1.0 / avgGrainSize) > 1).Select(g => g.NumPixel).DefaultIfEmpty(0).Average();
                double avgPixelRatioOversized = grains.Where(g => Math.Round(g.NumPixel * 1.0 / avgGrainSize) > 1).Select(g => g.PixelRatio).DefaultIfEmpty(0).Average();


                Console.WriteLine("===============================");
                Console.WriteLine("Grains: {0}|{1:0.} of {2} (e{3}), size {4:0.}px, {5:0.}x{6:0.}  {7:0.000}  undersized:{8}  oversized:{9}   {10:0.0} minutes  {11:0.0} s per grain",grains.Count,numGrains,expectedGrains[fileNo],expectedGrains[fileNo]-numGrains,avgGrainSize,avgWidth,avgHeight, avgPixelRatio,numUndersized,numOversized,watch.Elapsed.TotalMinutes, watch.Elapsed.TotalSeconds/grains.Count);



                // draw the description for each grain
                foreach (Grain grain in grains)
                {
                    grain.DrawText(avgGrainSize, display, CvColor.Black);
                }

                display.SaveImage("10-foundGrains.png");
                display.SaveImage("X-" + file + "-foundgrains.png");
            }
        }
    }
}



public class Grain
{
    private const int MIN_WIDTH = 70;
    private const int MAX_WIDTH = 130;
    private const int MIN_HEIGHT = 20;
    private const int MAX_HEIGHT = 35;

    private static CvFont font01 = new CvFont(FontFace.HersheyPlain, 0.5, 1);
    private Random random = new Random(4); // determined by fair dice throw; guaranteed to be random


    /// <summary> center of grain </summary>
    public CvPoint2D32f Position { get; private set; }
    /// <summary> Width of grain (always bigger than height)</summary>
    public float Width { get; private set; }
    /// <summary> Height of grain (always smaller than width)</summary>
    public float Height { get; private set; }

    public float MinorRadius { get { return this.Height / 2; } }
    public float MajorRadius { get { return this.Width / 2; } }
    public double Angle { get; private set; }
    public double AngleRad { get { return this.Angle * Math.PI / 180; } }

    public int Index { get; set; }
    public bool Converged { get; private set; }
    public int NumIterations { get; private set; }
    public double CircumferenceRatio { get; private set; }
    public int NumPixel { get; private set; }
    public List<EllipsePoint> EdgePoints { get; private set; }
    public double MeanSquaredError { get; private set; }
    public double PixelRatio { get { return this.NumPixel / (Math.PI * this.MajorRadius * this.MinorRadius); } }
    public bool IsTooSmall { get { return this.Width < MIN_WIDTH || this.Height < MIN_HEIGHT; } }

    public Grain(CvPoint2D32f position)
    {
        this.Position = position;
        this.Angle = 0;
        this.Width = 10;
        this.Height = 10;
        this.MeanSquaredError = double.MaxValue;
    }

    /// <summary>  fit a single rice grain of elipsoid shape </summary>
    public void Fit(CvMat img)
    {
        // distance between the sampled points on the elipse circumference in degree
        int angularResolution = 1;

        // how many times did the fitted ellipse not change significantly?
        int numConverged = 0;

        // number of iterations for this fit
        int numIterations;

        // repeat until the fitted ellipse does not change anymore, or the maximum number of iterations is reached
        for (numIterations = 0; numIterations < 100 && !this.Converged; numIterations++)
        {
            // points on an ideal ellipse
            CvPoint[] points;
            Cv.Ellipse2Poly(this.Position, new CvSize2D32f(MajorRadius, MinorRadius), Convert.ToInt32(this.Angle), 0, 359, out points,
                            angularResolution);

            // points on the edge of foregroudn to background, that are close to the elipse
            CvPoint?[] edgePoints = new CvPoint?[points.Length];

            // remeber if the previous pixel in a given direction was foreground or background
            bool[] prevPixelWasForeground = new bool[points.Length];

            // when the first edge pixel is found, this value is updated
            double firstEdgePixelOffset = 200;

            // from the center of the elipse towards the outside:
            for (float offset = -this.MajorRadius + 1; offset < firstEdgePixelOffset + 20; offset++)
            {
                // draw an ellipse with the given offset
                Cv.Ellipse2Poly(this.Position, new CvSize2D32f(MajorRadius + offset, MinorRadius + (offset > 0 ? offset : MinorRadius / MajorRadius * offset)), Convert.ToInt32(this.Angle), 0,
                                359, out points, angularResolution);

                // for each angle
                Parallel.For(0, points.Length, i =>
                {
                    if (edgePoints[i].HasValue) return; // edge for this angle already found

                    // check if the current pixel is foreground
                    bool foreground = points[i].X < 0 || points[i].Y < 0 || points[i].X >= img.Cols || points[i].Y >= img.Rows
                                          ? false // pixel outside of image borders is always background
                                          : img.Get2D(points[i].Y, points[i].X).Val0 > 0;


                    if (prevPixelWasForeground[i] && !foreground)
                    {
                        // found edge pixel!
                        edgePoints[i] = points[i];

                        // if this is the first edge pixel we found, remember its offset. the other pixels cannot be too far away, so we can stop searching soon
                        if (offset < firstEdgePixelOffset && offset > 0) firstEdgePixelOffset = offset;
                    }

                    prevPixelWasForeground[i] = foreground;
                });
            }

            // estimate the distance of each found edge pixel from the ideal elipse
            // this is a hack, since the actual equations for estimating point-ellipse distnaces are complicated
            Cv.Ellipse2Poly(this.Position, new CvSize2D32f(MajorRadius, MinorRadius), Convert.ToInt32(this.Angle), 0, 360,
                            out points, angularResolution);
            var pointswithDistance =
                edgePoints.Select((p, i) => p.HasValue ? new EllipsePoint(p.Value, points[i], this.Position) : null)
                          .Where(p => p != null).ToList();

            if (pointswithDistance.Count == 0)
            {
                Console.WriteLine("no points found! should never happen! ");
                break;
            }

            // throw away all outliers that are too far outside the current ellipse
            double medianSignedDistance = pointswithDistance.OrderBy(p => p.SignedDistance).ElementAt(pointswithDistance.Count / 2).SignedDistance;
            var goodPoints = pointswithDistance.Where(p => p.SignedDistance < medianSignedDistance + 15).ToList();

            // do a sort of ransack fit with the inlier points to find a new better ellipse
            CvBox2D bestfit = ellipseRansack(goodPoints);

            // check if the fit has converged
            if (Math.Abs(this.Angle - bestfit.Angle) < 3 && // angle has not changed much (<3°)
                Math.Abs(this.Position.X - bestfit.Center.X) < 3 && // position has not changed much (<3 pixel)
                Math.Abs(this.Position.Y - bestfit.Center.Y) < 3)
            {
                numConverged++;
            }
            else
            {
                numConverged = 0;
            }

            if (numConverged > 2)
            {
                this.Converged = true;
            }

            //Console.WriteLine("Iteration {0}, delta {1:0.000} {2:0.000} {3:0.000}    {4:0.000}-{5:0.000} {6:0.000}-{7:0.000} {8:0.000}-{9:0.000}",
            //  numIterations, Math.Abs(this.Angle - bestfit.Angle), Math.Abs(this.Position.X - bestfit.Center.X), Math.Abs(this.Position.Y - bestfit.Center.Y), this.Angle, bestfit.Angle, this.Position.X, bestfit.Center.X, this.Position.Y, bestfit.Center.Y);

            double msr = goodPoints.Sum(p => p.Distance * p.Distance) / goodPoints.Count;

            // for drawing the polygon, filter the edge points more strongly
            if (goodPoints.Count(p => p.SignedDistance < 5) > goodPoints.Count / 2)
                goodPoints = goodPoints.Where(p => p.SignedDistance < 5).ToList();
            double cutoff = goodPoints.Select(p => p.Distance).OrderBy(d => d).ElementAt(goodPoints.Count * 9 / 10);
            goodPoints = goodPoints.Where(p => p.SignedDistance <= cutoff + 1).ToList();

            int numCertainEdgePoints = goodPoints.Count(p => p.SignedDistance > -2);
            this.CircumferenceRatio = numCertainEdgePoints * 1.0 / points.Count();

            this.Angle = bestfit.Angle;
            this.Position = bestfit.Center;
            this.Width = bestfit.Size.Width;
            this.Height = bestfit.Size.Height;
            this.EdgePoints = goodPoints;
            this.MeanSquaredError = msr;

        }
        this.NumIterations = numIterations;
        //Console.WriteLine("Grain found after {0,3} iterations, size={1,3:0.}x{2,3:0.}   pixel={3,5}    edgePoints={4,3}   msr={5,2:0.00000}", numIterations, this.Width,
        //                        this.Height, this.NumPixel, this.EdgePoints.Count, this.MeanSquaredError);
    }

    /// <summary> a sort of ransakc fit to find the best ellipse for the given points </summary>
    private CvBox2D ellipseRansack(List<EllipsePoint> points)
    {
        using (CvMemStorage storage = new CvMemStorage(0))
        {
            // calculate minimum bounding rectangle
            CvSeq<CvPoint> fullPointSeq = CvSeq<CvPoint>.FromArray(points.Select(p => p.Point), SeqType.EltypePoint, storage);
            var boundingRect = fullPointSeq.MinAreaRect2();

            // the initial candidate is the previously found ellipse
            CvBox2D bestEllipse = new CvBox2D(this.Position, new CvSize2D32f(this.Width, this.Height), (float)this.Angle);
            double bestError = calculateEllipseError(points, bestEllipse);

            Queue<EllipsePoint> permutation = new Queue<EllipsePoint>();
            if (points.Count >= 5) for (int i = -2; i < 20; i++)
                {
                    CvBox2D ellipse;
                    if (i == -2)
                    {
                        // first, try the ellipse described by the boundingg rect
                        ellipse = boundingRect;
                    }
                    else if (i == -1)
                    {
                        // then, try the best-fit ellipsethrough all points
                        ellipse = fullPointSeq.FitEllipse2();
                    }
                    else
                    {
                        // then, repeatedly fit an ellipse through a random sample of points

                        // pick some random points
                        if (permutation.Count < 5) permutation = new Queue<EllipsePoint>(permutation.Concat(points.OrderBy(p => random.Next())));
                        CvSeq<CvPoint> pointSeq = CvSeq<CvPoint>.FromArray(permutation.Take(10).Select(p => p.Point), SeqType.EltypePoint, storage);
                        for (int j = 0; j < pointSeq.Count(); j++) permutation.Dequeue();

                        // fit an ellipse through these points
                        ellipse = pointSeq.FitEllipse2();
                    }

                    // assure that the width is greater than the height
                    ellipse = NormalizeEllipse(ellipse);

                    // if the ellipse is too big for agrain, shrink it
                    ellipse = rightSize(ellipse, points.Where(p => isOnEllipse(p.Point, ellipse, 10, 10)).ToList());

                    // sometimes the ellipse given by FitEllipse2 is totally off
                    if (boundingRect.Center.DistanceTo(ellipse.Center) > Math.Max(boundingRect.Size.Width, boundingRect.Size.Height) * 2)
                    {
                        // ignore this bad fit
                        continue;
                    }

                    // estimate the error
                    double error = calculateEllipseError(points, ellipse);

                    if (error < bestError)
                    {
                        // found a better ellipse!
                        bestError = error;
                        bestEllipse = ellipse;
                    }
                }

            return bestEllipse;
        }
    }

    /// <summary> The proper thing to do would be to use the actual distance of each point to the elipse.
    /// However that formula is complicated, so ...  </summary>
    private double calculateEllipseError(List<EllipsePoint> points, CvBox2D ellipse)
    {
        const double toleranceInner = 5;
        const double toleranceOuter = 10;
        int numWrongPoints = points.Count(p => !isOnEllipse(p.Point, ellipse, toleranceInner, toleranceOuter));
        double ratioWrongPoints = numWrongPoints * 1.0 / points.Count;

        int numTotallyWrongPoints = points.Count(p => !isOnEllipse(p.Point, ellipse, 10, 20));
        double ratioTotallyWrongPoints = numTotallyWrongPoints * 1.0 / points.Count;

        // this pseudo-distance is biased towards deviations on the major axis
        double pseudoDistance = Math.Sqrt(points.Sum(p => Math.Abs(1 - ellipseMetric(p.Point, ellipse))) / points.Count);

        // primarily take the number of points far from the elipse border as an error metric.
        // use pseudo-distance to break ties between elipses with the same number of wrong points
        return ratioWrongPoints * 1000  + ratioTotallyWrongPoints+ pseudoDistance / 1000;
    }


    /// <summary> shrink an ellipse if it is larger than the maximum grain dimensions </summary>
    private static CvBox2D rightSize(CvBox2D ellipse, List<EllipsePoint> points)
    {
        if (ellipse.Size.Width < MAX_WIDTH && ellipse.Size.Height < MAX_HEIGHT) return ellipse;

        // elipse is bigger than the maximum grain size
        // resize it so it fits, while keeping one edge of the bounding rectangle constant

        double desiredWidth = Math.Max(10, Math.Min(MAX_WIDTH, ellipse.Size.Width));
        double desiredHeight = Math.Max(10, Math.Min(MAX_HEIGHT, ellipse.Size.Height));

        CvPoint2D32f average = points.Average();

        // get the corners of the surrounding bounding box
        var corners = ellipse.BoxPoints().ToList();

        // find the corner that is closest to the center of mass of the points
        int i0 = ellipse.BoxPoints().Select((point, index) => new { point, index }).OrderBy(p => p.point.DistanceTo(average)).First().index;
        CvPoint p0 = corners[i0];

        // find the two corners that are neighbouring this one
        CvPoint p1 = corners[(i0 + 1) % 4];
        CvPoint p2 = corners[(i0 + 3) % 4];

        // p1 is the next corner along the major axis (widht), p2 is the next corner along the minor axis (height)
        if (p0.DistanceTo(p1) < p0.DistanceTo(p2))
        {
            CvPoint swap = p1;
            p1 = p2;
            p2 = swap;
        }

        // calculate the three other corners with the desired widht and height

        CvPoint2D32f edge1 = (p1 - p0);
        CvPoint2D32f edge2 = p2 - p0;
        double edge1Length = Math.Max(0.0001, p0.DistanceTo(p1));
        double edge2Length = Math.Max(0.0001, p0.DistanceTo(p2));

        CvPoint2D32f newCenter = (CvPoint2D32f)p0 + edge1 * (desiredWidth / edge1Length) + edge2 * (desiredHeight / edge2Length);

        CvBox2D smallEllipse = new CvBox2D(newCenter, new CvSize2D32f((float)desiredWidth, (float)desiredHeight), ellipse.Angle);

        return smallEllipse;
    }

    /// <summary> assure that the width of the elipse is the major axis, and the height is the minor axis.
    /// Swap widht/height and rotate by 90° otherwise  </summary>
    private static CvBox2D NormalizeEllipse(CvBox2D ellipse)
    {
        if (ellipse.Size.Width < ellipse.Size.Height)
        {
            ellipse = new CvBox2D(ellipse.Center, new CvSize2D32f(ellipse.Size.Height, ellipse.Size.Width), (ellipse.Angle + 90 + 360) % 360);
        }
        return ellipse;
    }

    /// <summary> greater than 1 for points outside ellipse, smaller than 1 for points inside ellipse </summary>
    private static double ellipseMetric(CvPoint p, CvBox2D ellipse)
    {
        double theta = ellipse.Angle * Math.PI / 180;
        double u = Math.Cos(theta) * (p.X - ellipse.Center.X) + Math.Sin(theta) * (p.Y - ellipse.Center.Y);
        double v = -Math.Sin(theta) * (p.X - ellipse.Center.X) + Math.Cos(theta) * (p.Y - ellipse.Center.Y);

        return u * u / (ellipse.Size.Width * ellipse.Size.Width / 4) + v * v / (ellipse.Size.Height * ellipse.Size.Height / 4);
    }

    /// <summary> Is the point on the ellipseBorder, within a certain tolerance </summary>
    private static bool isOnEllipse(CvPoint p, CvBox2D ellipse, double toleranceInner, double toleranceOuter)
    {
        double theta = ellipse.Angle * Math.PI / 180;
        double u = Math.Cos(theta) * (p.X - ellipse.Center.X) + Math.Sin(theta) * (p.Y - ellipse.Center.Y);
        double v = -Math.Sin(theta) * (p.X - ellipse.Center.X) + Math.Cos(theta) * (p.Y - ellipse.Center.Y);

        double innerEllipseMajor = (ellipse.Size.Width - toleranceInner) / 2;
        double innerEllipseMinor = (ellipse.Size.Height - toleranceInner) / 2;
        double outerEllipseMajor = (ellipse.Size.Width + toleranceOuter) / 2;
        double outerEllipseMinor = (ellipse.Size.Height + toleranceOuter) / 2;

        double inside = u * u / (innerEllipseMajor * innerEllipseMajor) + v * v / (innerEllipseMinor * innerEllipseMinor);
        double outside = u * u / (outerEllipseMajor * outerEllipseMajor) + v * v / (outerEllipseMinor * outerEllipseMinor);
        return inside >= 1 && outside <= 1;
    }


    /// <summary> count the number of foreground pixels for this grain </summary>
    public int CountPixel(CvMat img)
    {
        // todo: this is an incredibly inefficient way to count, allocating a new image with the size of the input each time
        using (CvMat mask = new CvMat(img.Rows, img.Cols, MatrixType.U8C1))
        {
            mask.SetZero();
            mask.FillPoly(new CvPoint[][] { this.EdgePoints.Select(p => p.Point).ToArray() }, CvColor.White);
            mask.And(img, mask);
            this.NumPixel = mask.CountNonZero();
        }
        return this.NumPixel;
    }

    /// <summary> draw the recognized shape of the grain </summary>
    public void Draw(CvMat img, CvColor color)
    {
        img.FillPoly(new CvPoint[][] { this.EdgePoints.Select(p => p.Point).ToArray() }, color);
    }

    /// <summary> draw the contours of the grain </summary>
    public void DrawContour(CvMat img, CvColor color)
    {
        img.DrawPolyLine(new CvPoint[][] { this.EdgePoints.Select(p => p.Point).ToArray() }, true, color);
    }

    /// <summary> draw the best-fit ellipse of the grain </summary>
    public void DrawEllipse(CvMat img, CvColor color)
    {
        img.DrawEllipse(this.Position, new CvSize2D32f(this.MajorRadius, this.MinorRadius), this.Angle, 0, 360, color, 1);
    }

    /// <summary> print the grain index and the number of pixels divided by the average grain size</summary>
    public void DrawText(double averageGrainSize, CvMat img, CvColor color)
    {
        img.PutText(String.Format("{0}|{1:0.0}", this.Index, this.NumPixel / averageGrainSize), this.Position + new CvPoint2D32f(-5, 10), font01, color);
    }

}

I'm somewhat embarrassed by this solution, because a) I'm not sure whether it is within the spirit of this challenge, and b) it is far too big for a codegolf answer and lacks the elegance of the other solutions.

On the other hand, I'm quite happy with the progress I made in labelling the grains rather than just counting them, so there is that.


You know, you could reduce the code length by using shorter names and applying some other golfing techniques ;)
Optimizer

Probably, but I don't want to obfuscate this solution any further. It's too obscure for my taste as it is :)
HugoRune, 2014

+1 for the effort, as you are the only one who found a way to display each grain individually. Unfortunately, the code is a bit bloated and relies heavily on hard-coded constants. I'd be curious to see how the scanline algorithm I wrote performs on this (on the individually coloured grains).
tigrou, 2014

I really think this is the right approach for this kind of problem (+1 from me), but one thing I wonder: why do you "pick 10 random edge pixels"? I would think that picking the edge points with the fewest neighbouring edge points (i.e. parts that stick out) would, in theory, eliminate the "easiest" grains first. Have you considered that?
David Rogers

I have thought about it, but not tried it yet. The "10 random starting positions" was a late addition that was easy to add and easy to parallelize. Before that, "one random starting position" was much better than "always the top-left corner". The danger of choosing the starting positions with the same strategy every time is that when I remove the best fit, the other 9 are likely to get picked again the next time, and over time the worst of those starting positions will lag behind and be picked again and again. A part that sticks out may just be the leftover of a previous grain that was not removed completely.
HugoRune, 2014

17

C++, OpenCV, score: 9

The basic idea of my approach is simple - try to erase single grains (and "double grains", i.e. 2 (but no more!) grains close to each other) from the image, and then count the rest using an area-based method (like Falko, Ell and belisarius). This is a bit better than the standard "area approach", because it makes it easier to find a good averagePixelsPerObject value.

(Step 1) First we need Otsu binarization on the S channel of the HSV image. The next step is a dilation operator to improve the quality of the extracted foreground. Then we need to find contours. Of course some contours are not rice grains - we need to delete contours that are too small (with an area smaller than averagePixelsPerObject/4; averagePixelsPerObject is 2855 in my case). Now we can finally start counting grains :) (Step 2) Finding single and double grains is quite easy - just look through the contour list for contours whose area lies within specific ranges - if the contour area is in range, delete it from the list and add 1 (or 2 for a "double" grain) to the grain counter. (Step 3) The last step is of course to divide the area of the remaining contours by the averagePixelsPerObject value and add the result to the grain counter.

Images (for image F.jpg) should explain this better than words:
Step 1 (without small contours (noise)): (image)
Step 2 - simple contours only: (image)
Step 3 - remaining contours: (image)

Here is the code. It's ugly, but it should work fine. Of course OpenCV is required.

#include "stdafx.h"

#include <cv.hpp>
#include <cxcore.h>
#include <highgui.h>
#include <vector>

using namespace cv;
using namespace std;

//A: 3, B: 5, C: 12, D: 25, E: 50, F: 83, G: 120, H:150, I: 151, J: 200
const int goodResults[] = {3, 5, 12, 25, 50, 83, 120, 150, 151, 200};
const float averagePixelsPerObject = 2855.0;

const int singleObjectPixelsCountMin = 2320;
const int singleObjectPixelsCountMax = 4060;

const int doubleObjectPixelsCountMin = 5000;
const int doubleObjectPixelsCountMax = 8000;

float round(float x)
{
    return x >= 0.0f ? floorf(x + 0.5f) : ceilf(x - 0.5f);
}

Mat processImage(Mat m, int imageIndex, int &error)
{
    int objectsCount = 0;
    Mat output, thresholded;
    cvtColor(m, output, CV_BGR2HSV);
    vector<Mat> channels;
    split(output, channels);
    threshold(channels[1], thresholded, 0, 255, CV_THRESH_OTSU | CV_THRESH_BINARY);
    dilate(thresholded, output, Mat()); //dilate to imporove quality of binary image
    imshow("thresholded", thresholded);
    int nonZero = countNonZero(output); //not realy important - just for tests
    if (imageIndex != -1)
        cout << "non zero: " << nonZero << ", average pixels per object: " << nonZero/goodResults[imageIndex] << endl;
    else
        cout << "non zero: " << nonZero << endl;

    vector<vector<Point>> contours, contoursOnlyBig, contoursWithoutSimpleObjects, contoursSimple;
    findContours(output, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_NONE); //find only external contours
    for (int i=0; i<contours.size(); i++)
        if (contourArea(contours[i]) > averagePixelsPerObject/4.0)
            contoursOnlyBig.push_back(contours[i]); //add only contours with area > averagePixelsPerObject/4 ---> skip small contours (noise)

    Mat bigContoursOnly = Mat::zeros(output.size(), output.type());
    Mat allContours = bigContoursOnly.clone();
    drawContours(allContours, contours, -1, CV_RGB(255, 255, 255), -1);
    drawContours(bigContoursOnly, contoursOnlyBig, -1, CV_RGB(255, 255, 255), -1);
    //imshow("all contours", allContours);
    output = bigContoursOnly;

    nonZero = countNonZero(output); //not realy important - just for tests
    if (imageIndex != -1)
        cout << "non zero: " << nonZero << ", average pixels per object: " << nonZero/goodResults[imageIndex] << " objects: "  << goodResults[imageIndex] << endl;
    else
        cout << "non zero: " << nonZero << endl;

    for (int i=0; i<contoursOnlyBig.size(); i++)
    {
        double area = contourArea(contoursOnlyBig[i]);
        if (area >= singleObjectPixelsCountMin && area <= singleObjectPixelsCountMax) //is this contours a single grain ?
        {
            contoursSimple.push_back(contoursOnlyBig[i]);
            objectsCount++;
        }
        else
        {
            if (area >= doubleObjectPixelsCountMin && area <= doubleObjectPixelsCountMax) //is this contours a double grain ?
            {
                contoursSimple.push_back(contoursOnlyBig[i]);
                objectsCount+=2;
            }
            else
                contoursWithoutSimpleObjects.push_back(contoursOnlyBig[i]); //group of grainss
        }
    }

    cout << "founded single objects: " << objectsCount << endl;
    Mat thresholdedImageMask = Mat::zeros(output.size(), output.type()), simpleContoursMat = Mat::zeros(output.size(), output.type());
    drawContours(simpleContoursMat, contoursSimple, -1, CV_RGB(255, 255, 255), -1);
    if (contoursWithoutSimpleObjects.size())
        drawContours(thresholdedImageMask, contoursWithoutSimpleObjects, -1, CV_RGB(255, 255, 255), -1); //draw only contours of groups of grains
    imshow("simpleContoursMat", simpleContoursMat);
    imshow("thresholded image mask", thresholdedImageMask);
    Mat finalResult;
    thresholded.copyTo(finalResult, thresholdedImageMask); //copy using mask - only pixels whc=ich belongs to groups of grains will be copied
    //imshow("finalResult", finalResult);
    nonZero = countNonZero(finalResult); // count number of pixels in all gropus of grains (of course without single or double grains)
    int goodObjectsLeft = (imageIndex != -1) ? goodResults[imageIndex]-objectsCount : 0; //avoid indexing goodResults with -1 when a single image is given on the command line
    if (imageIndex != -1)
        cout << "non zero: " << nonZero << ", average pixels per object: " << (goodObjectsLeft ? (nonZero/goodObjectsLeft) : 0) << " objects left: " << goodObjectsLeft <<  endl;
    else
        cout << "non zero: " << nonZero << endl;
    objectsCount += round((float)nonZero/(float)averagePixelsPerObject);

    if (imageIndex != -1)
    {
        error = objectsCount-goodResults[imageIndex];
        cout << "final objects count: " << objectsCount << ", should be: " << goodResults[imageIndex] << ", error is: " << error <<  endl;
    }
    else
        cout << "final objects count: " << objectsCount << endl; 
    return output;
}

int main(int argc, char* argv[])
{
    string fileName = "A";
    int totalError = 0, error;
    bool fastProcessing = true;
    vector<int> errors;

    if (argc > 1)
    {
        Mat m = imread(argv[1]);
        imshow("image", m);
        processImage(m, -1, error);
        waitKey(-1);
        return 0;
    }

    while(true)
    {
        Mat m = imread("images\\" + fileName + ".jpg");
        cout << "Processing image: " << fileName << endl;
        imshow("image", m);
        processImage(m, fileName[0] - 'A', error);
        totalError += abs(error);
        errors.push_back(error);
        if (!fastProcessing && waitKey(-1) == 'q')
            break;
        fileName[0] += 1;
        if (fileName[0] > 'J')
        {
            if (fastProcessing)
                break;
            else
                fileName[0] = 'A';
        }
    }
    cout << "Total error: " << totalError << endl;
    cout << "Errors: " << (Mat)errors << endl;
    cout << "averagePixelsPerObject:" << averagePixelsPerObject << endl;

    return 0;
}

If you want to see the results of all the steps, uncomment all the imshow(.., ..) calls and set the fastProcessing variable to false. The images (A.jpg, B.jpg, ...) should be in a directory named images. Of course you can also give the filename of a single image as a command-line argument.

Of course, if something is unclear I can explain it and/or provide some images/information.


12

C# + OpenCvSharp, score: 71

This one is most vexing. I tried to find a solution that actually identifies each grain using a watershed, but I just. Can't. Get. It. To. Work.

I settled for a solution that at least separates some individual grains and uses those to estimate the average grain size. But so far I have not been able to beat the solutions with a hard-coded grain size.

So the main highlight of this solution is: it does not assume a fixed pixel size for the grains, and it should still work if the camera is moved or the type of rice is changed.

A.jpg; number of grains:   3; expected   3; error  0; pixels per grain: 2525.0;
B.jpg; number of grains:   7; expected   5; error  2; pixels per grain: 1920.0;
C.jpg; number of grains:   6; expected  12; error  6; pixels per grain: 4242.5;
D.jpg; number of grains:  23; expected  25; error  2; pixels per grain: 2415.5;
E.jpg; number of grains:  47; expected  50; error  3; pixels per grain: 2729.9;
F.jpg; number of grains:  65; expected  83; error 18; pixels per grain: 2860.5;
G.jpg; number of grains: 120; expected 120; error  0; pixels per grain: 2552.3;
H.jpg; number of grains: 159; expected 150; error  9; pixels per grain: 2624.7;
I.jpg; number of grains: 141; expected 151; error 10; pixels per grain: 2697.4;
J.jpg; number of grains: 179; expected 200; error 21; pixels per grain: 2847.1;
Total error: 71

My solution works like this:

Separate the foreground by converting the image to HSV and applying Otsu thresholding on the saturation channel. This is quite simple and works very well; I would recommend it to everyone who wants to attempt this challenge:

saturation channel                -->         Otsu thresholding

(image) -> (image)

This removes the background cleanly.

Then I additionally removed the grain shadows from the foreground by applying a fixed threshold on the value channel. (Not sure whether that actually helps much, but it was simple enough to add.)

(image)

Then I apply a distance transform to the foreground image

(image)

and find all the local maxima in this distance transform.

This is where my idea breaks down. To avoid multiple local maxima within the same grain, I have to do a lot of filtering. Currently I keep only the strongest maximum within a 45-pixel radius, which means that not every grain ends up with a local maximum. And I have no real justification for the 45-pixel radius; it was simply a value that worked.

(image)

(As you can see, these are nowhere near enough seeds to account for every grain.)

Then I use those maxima as seeds for the watershed algorithm:

(image)

The results are so-so. I had hoped that most blobs would be individual grains, but the clumps are still too big.

Now I identify the smallest blobs, compute their average pixel size, and estimate the grain count from that. This is not what I set out to do, but it was the only way to salvage it.
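
A condensed sketch of this pipeline in Python/OpenCV (not the author's C# code, which follows below): Otsu threshold on the saturation channel, distance transform, suppressed local maxima as watershed seeds, then a blob-size based count. The 45-pixel radius comes from the text above; the other constants and the crude refinement loop are assumptions.

import sys
import cv2
import numpy as np

img = cv2.imread(sys.argv[1])
sat = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)[:, :, 1]
_, fg = cv2.threshold(sat, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)

dist = cv2.distanceTransform(fg, cv2.DIST_L2, 5)

# keep only pixels that are the maximum within a 45-pixel radius (crude non-maximum suppression)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (91, 91))
seeds = ((dist > 0) & (dist >= cv2.dilate(dist, kernel))).astype(np.uint8)

# label the seeds and run the watershed; give the background its own label
n_labels, markers = cv2.connectedComponents(seeds)
markers[fg == 0] = n_labels + 1
cv2.watershed(img, markers)

# pixel count per seeded blob (ignoring background and watershed boundaries)
sizes = [np.count_nonzero(markers == i) for i in range(1, n_labels)]
single = sorted(sizes)[len(sizes) // 15] if sizes else 1   # one of the smallest blobs ~ one grain
for _ in range(5):                                          # crude refinement, no convergence check
    small = [s for s in sizes if s < 2 * single]
    if small:
        single = sum(small) / len(small)
print(int(sum(round(s / single) for s in sizes)))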

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using OpenCvSharp;

namespace GrainTest2
{
    class Program
    {
        static void Main(string[] args)
        {
            string[] files = { "sourceA.jpg", "sourceB.jpg", "sourceC.jpg", "sourceD.jpg", "sourceE.jpg",
                               "sourceF.jpg", "sourceG.jpg", "sourceH.jpg", "sourceI.jpg", "sourceJ.jpg", };
            int[] expectedGrains = { 3, 5, 12, 25, 50, 83, 120, 150, 151, 200, };

            int totalError = 0;
            int totalPixels = 0;
            int fileNo = 0;

            // ... (the per-image preprocessing is garbled in the source: loading the image,
            //      Otsu thresholding on the saturation channel, the distance transform and
            //      the filtered local maxima described above) ...

                    List<int> markers = new List<int>();
                    using (CvMemStorage storage = new CvMemStorage())
                    using (CvContourScanner scanner = new CvContourScanner(localMaxima, storage, CvContour.SizeOf, ContourRetrieval.External, ContourChain.ApproxNone))
                    {
                        // assign each local maximum a seed number 25, 30, 35, ...
                        // (the actual numbers do not matter; they were chosen for better visibility in the png)
                        int markerNo = 20;
                        foreach (CvSeq c in scanner)
                        {
                            markerNo += 5;
                            markers.Add(markerNo);
                            waterShedMarkers.DrawContours(c, new CvScalar(markerNo), new CvScalar(markerNo), 0, -1);
                        }
                    }
                    waterShedMarkers.SaveImage("08-watershed-seeds.png");

                    source.Watershed(waterShedMarkers);
                    waterShedMarkers.SaveImage("09-watershed-result.png");


                    List<int> pixelsPerBlob = new List<int>();

                    // Terrible hack because I could not get Cv2.ConnectedComponents to work with this openCv wrapper
                    // So I made a workaround to count the number of pixels per blob
                    waterShedMarkers.ConvertScale(waterShedThreshold);
                    foreach (int markerNo in markers)
                    {
                        using (CvMat tmp = new CvMat(waterShedMarkers.Rows, waterShedThreshold.Cols, MatrixType.U8C1))
                        {
                            waterShedMarkers.CmpS(markerNo, tmp, ArrComparison.EQ);
                            pixelsPerBlob.Add(tmp.CountNonZero());

                        }
                    }

                    // estimate the size of a single grain
                    // step 1: assume that the 10% smallest blob is a whole grain;
                    double singleGrain = pixelsPerBlob.OrderBy(p => p).ElementAt(pixelsPerBlob.Count/15);

                    // step2: take all blobs that are not much bigger than the currently estimated singel grain size
                    //        average their size
                    //        repeat until convergence (too lazy to check for convergence)
                    for (int i = 0; i < 10; i++)  // (loop bound and size factor below are approximate; the original line was garbled)
                        singleGrain = pixelsPerBlob.Where(p => p < 1.5 * singleGrain).Average();

                    // round each blob's pixel count to a multiple of the single-grain size and sum
                    double numGrains = pixelsPerBlob.Select(p => Math.Round(p / singleGrain)).Sum();

                    Console.WriteLine("input: {0}; number of grains: {1,4:0.}; expected {2,4}; error {3,4}; pixels per grain: {4:0.0}; better: {5:0.}", file, numGrains, expectedGrains[fileNo], Math.Abs(numGrains - expectedGrains[fileNo]), singleGrain, pixelsPerBlob.Sum() / 1434.9);

                    totalError += Math.Abs(numGrains - expectedGrains[fileNo]);
                    totalPixels += pixelsPerBlob.Sum();

                    // this is a terrible hack to visualise the estimated number of grains per blob.
                    // i'm too tired to clean it up
                    #region please ignore
                    using (CvMemStorage storage = new CvMemStorage())
                    using (CvMat tmp = waterShedThreshold.Clone())
                    using (CvMat tmpvisu = new CvMat(source.Rows, source.Cols, MatrixType.S8C3))
                    {
                        foreach (int markerNo in markers)
                        {
                            tmp.SetZero();
                            waterShedMarkers.CmpS(markerNo, tmp, ArrComparison.EQ);
                            double curGrains = tmp.CountNonZero() * 1.0 / singleGrain;
                            using (
                                CvContourScanner scanner = new CvContourScanner(tmp, storage, CvContour.SizeOf, ContourRetrieval.External,
                                                                                ContourChain.ApproxNone))
                            {
                                tmpvisu.Set(CvColor.Random(), tmp);
                                foreach (CvSeq c in scanner)
                                {
                                    //tmpvisu.DrawContours(c, CvColor.Random(), CvColor.DarkGreen, 0, -1);
                                    tmpvisu.PutText("" + Math.Round(curGrains, 1), c.First().Value, new CvFont(FontFace.HersheyPlain, 2, 2),
                                                    CvColor.Red);
                                }

                            }


                        }
                        tmpvisu.SaveImage("10-visu.png");
                        tmpvisu.SaveImage("10-visu" + file + ".png");
                    }
                    #endregion

                }

            }
            Console.WriteLine("total error: {0}, ideal Pixel per Grain: {1:0.0}", totalError, totalPixels*1.0/expectedGrains.Sum());

        }
    }
}

A quick test using a hard-coded pixels-per-grain size of 2544.4 gives a total error of 36, which is still larger than most of the other solutions.

(images)


I think you could use a threshold with some small value on the result of the distance transform (a simple erode operation might also work) - this should split some of the grains into smaller groups (preferably containing only 1 or 2 grains). It should then be much easier to count those lonely grains. The big groups you can count the way most people do here - by dividing their area by the average area of a single grain.
cyriel, 2014

9

HTML + JavaScript: score 39

The exact values are:

Estimated | Actual
        3 |      3
        5 |      5
       12 |     12
       23 |     25
       51 |     50
       82 |     83
      125 |    120
      161 |    150
      167 |    151
      223 |    200

It breaks down (becomes inaccurate) at the larger values.

window.onload = function() {
  var $ = document.querySelector.bind(document);
  var canvas = $("canvas"),
    ctx = canvas.getContext("2d");

  function handleFileSelect(evt) {
    evt.preventDefault();
    var file = evt.target.files[0],
      reader = new FileReader();
    if (!file) return;
    reader.onload = function(e) {
      var img = new Image();
      img.onload = function() {
        canvas.width = this.width;
        canvas.height = this.height;
        ctx.drawImage(this, 0, 0);
        start();
      };
      img.src = e.target.result;
    };
    reader.readAsDataURL(file);
  }


  function start() {
    var imgdata = ctx.getImageData(0, 0, canvas.width, canvas.height);
    var data = imgdata.data;
    var background = 0;
    var totalPixels = data.length / 4;
    for (var i = 0; i < data.length; i += 4) {
      var red = data[i],
        green = data[i + 1],
        blue = data[i + 2];
      if (Math.abs(red - 197) < 40 && Math.abs(green - 176) < 40 && Math.abs(blue - 133) < 40) {
        ++background;
        data[i] = 1;
        data[i + 1] = 1;
        data[i + 2] = 1;
      }
    }
    ctx.putImageData(imgdata, 0, 0);
    console.log("Pixels of rice", (totalPixels - background));
    // console.log("Total pixels", totalPixels);
    $("output").innerHTML = "Approximately " + Math.round((totalPixels - background) / 2670) + " grains of rice.";
  }

  $("input").onchange = handleFileSelect;
}
<input type="file" id="f" />
<canvas></canvas>
<output></output>

Explanation: Basically, count the number of rice pixels and divide them by the average number of pixels per grain.


With the 3-rice image I got an estimate of 0... :/
Kroltan

1
@Kroltan Not when you use the full-size image.
Calvin's Hobbies, 2014

1
@Calvin'sHobbies FF36 on Windows gets 0 with the full-size image, and 3 on Ubuntu.
Kroltan

4
@BobbyJack The rice is guaranteed to be at roughly the same scale across the images. I see no problem with that.
Calvin's Hobbies, 2014

1
@githubphagocyte - a rather obvious explanation - if you count all the white pixels in a binarized version of the image and divide that number by the number of grains in the image, you get this result. Of course the exact result may differ, because of the binarization method used and other things (like operations performed after binarization), but as you can see in the other answers it will be in the 2500-3500 range.
cyriel, 2014

4

An attempt using PHP. Not the lowest-scoring answer, but its code is fairly simple.

Score: 31

<?php
for($c = 1; $c <= 10; $c++) {
  $a = imagecreatefromjpeg("/tmp/$c.jpg");
  list($width, $height) = getimagesize("/tmp/$c.jpg");
  $rice = 0;
  for($i = 0; $i < $width; $i++) {
    for($j = 0; $j < $height; $j++) {
      $colour = imagecolorat($a, $i, $j);
      if (($colour & 0xFF) < 95) $rice++;
    }
  }
  echo ceil($rice/2966);
}

Self-scored

95 is a blue value that seemed to work when testing with GIMP, and 2966 seems to be a normal average grain size.

Licensed under cc by-sa 3.0 with attribution required.