From: Thomas Passin
Newsgroups: comp.lang.python
Subject: Re: Predicting an object over an pretrained model is not working
Date: Tue, 30 Jul 2024 15:25:39 -0400
Lines: 237
Message-ID: <263356ef-7ad8-4abc-9940-bd8536ee13eb@tompassin.net>
On 7/30/2024 2:18 PM, marc nicole via Python-list wrote:
> Hello all,
>
> I want to predict an object's label from an input image. I have trained
> a model using TensorFlow on an annotated database, where the target
> object to predict was added to the pretrained model. The code I am
> using is the following, where I set the target object's image as input
> and want to get the prediction output:
>
> import numpy as np
> # YoloTinyNet is defined elsewhere in my code.
>
> class MultiObjectDetection():
>
>     def __init__(self, classes_name):
>         self._classes_name = classes_name
>         self._num_classes = len(classes_name)
>
>         self._common_params = {'image_size': 448,
>                                'num_classes': self._num_classes,
>                                'batch_size': 1}
>         self._net_params = {'cell_size': 7, 'boxes_per_cell': 2,
>                             'weight_decay': 0.0005}
>         self._net = YoloTinyNet(self._common_params, self._net_params,
>                                 test=True)
>
>     def predict_object(self, image):
>         predicts = self._net.inference(image)
>         return predicts
>
>     def process_predicts(self, resized_img, predicts, thresh=0.2):
>         """
>         Process the predictions of object detection for one input image.
>
>         Args:
>             resized_img: resized source image.
>             predicts: output of the model.
>             thresh: threshold on bounding box confidence.
>         Return:
>             predicts_dict: {"stick": [[x1, y1, x2, y2, scores1], [...]]}.
>         """
>         cls_num = self._num_classes
>         bbx_per_cell = self._net_params["boxes_per_cell"]
>         cell_size = self._net_params["cell_size"]
>         img_size = self._common_params["image_size"]
>         p_classes = predicts[0, :, :, 0:cls_num]
>         # Two bounding boxes in one cell.
>         C = predicts[0, :, :, cls_num:cls_num + bbx_per_cell]
>         # All bounding box positions.
>         coordinate = predicts[0, :, :, cls_num + bbx_per_cell:]
>
>         p_classes = np.reshape(p_classes,
>                                (cell_size, cell_size, 1, cls_num))
>         C = np.reshape(C, (cell_size, cell_size, bbx_per_cell, 1))
>
>         # Confidence for all classes of all bounding boxes:
>         # (cell_size, cell_size, boxes_per_cell, class_num) = (7, 7, 2, 1).
>         P = C * p_classes
>
>         predicts_dict = {}
>         for i in range(cell_size):
>             for j in range(cell_size):
>                 temp_data = np.zeros_like(P, np.float32)
>                 temp_data[i, j, :, :] = P[i, j, :, :]
>                 # Index of the class with maximum confidence
>                 # for every bounding box of this cell.
>                 position = np.argmax(temp_data)
>                 index = np.unravel_index(position, P.shape)
>
>                 if P[index] > thresh:
>                     class_num = index[-1]
>                     # (cell_size, cell_size, boxes_per_cell, 4):
>                     # [xcenter, ycenter, w, h] per box.
>                     coordinate = np.reshape(
>                         coordinate,
>                         (cell_size, cell_size, bbx_per_cell, 4))
>                     max_coordinate = coordinate[index[0], index[1],
>                                                 index[2], :]
>
>                     xcenter = max_coordinate[0]
>                     ycenter = max_coordinate[1]
>                     w = max_coordinate[2]
>                     h = max_coordinate[3]
>
>                     xcenter = (index[1] + xcenter) * (1.0 * img_size / cell_size)
>                     ycenter = (index[0] + ycenter) * (1.0 * img_size / cell_size)
>
>                     w = w * img_size
>                     h = h * img_size
========== REMAINDER OF ARTICLE TRUNCATED ==========
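The box-selection and cell-to-image coordinate conversion in process_predicts can be exercised in isolation, without YoloTinyNet, by feeding it random stand-in tensors. This is a minimal sketch assuming the shapes implied by the quoted code (7x7 grid, 2 boxes per cell, 1 class, 448-pixel input); the tensor values here are made up, so only the shapes and arithmetic are meaningful:

```python
import numpy as np

# Shapes assumed from the quoted code.
cell_size, boxes_per_cell, num_classes, img_size = 7, 2, 1, 448

rng = np.random.default_rng(0)
# Stand-ins for the model outputs: class probabilities per cell,
# box confidences, and box coordinates (cell-relative center, w, h).
p_classes = rng.random((cell_size, cell_size, 1, num_classes))
C = rng.random((cell_size, cell_size, boxes_per_cell, 1))
coords = rng.random((cell_size, cell_size, boxes_per_cell, 4))

# Broadcast (7,7,2,1) * (7,7,1,num_classes) -> (7,7,2,num_classes):
# confidence of every class for every box in every cell.
P = C * p_classes

# Flat argmax + unravel_index picks the single most confident
# (row, col, box, class) tuple, as the loop in the post does cell by cell.
index = np.unravel_index(np.argmax(P), P.shape)
i, j, b, cls = index

# Convert the cell-relative center to image pixels: column index j maps
# to x, row index i maps to y, each cell spanning img_size/cell_size px.
xc, yc, w, h = coords[i, j, b, :]
xcenter = (j + xc) * (img_size / cell_size)
ycenter = (i + yc) * (img_size / cell_size)
w, h = w * img_size, h * img_size
xmin, ymin = xcenter - w / 2, ycenter - h / 2
xmax, ymax = xmin + w, ymin + h
print(cls, (xmin, ymin, xmax, ymax))
```

Printing intermediate shapes and the selected index this way is usually the quickest path to seeing whether the slicing offsets (cls_num, cls_num + bbx_per_cell) actually line up with what the network emits.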