
activation function 2

[Deep Learning] ํ”„๋ ˆ์ž„์›Œํฌ ํ™•์žฅ ์ฝ”๋“œ ๊ตฌํ˜„

This post continues from the previous deep learning class code implementation, so it is best to read the previous post first. https://heejins.tistory.com/36 float: # Apply the softmax function to each row (one row per observation) softmax_preds = softmax(self.prediction, axis = 1) # Clip the softmax output range to keep the loss value from becoming unstable self.softmax_preds = np.clip(softmax_preds, self.eps, 1 - self.eps) # Compute the actual loss value softmax_cross_entropy_loss = ( -1.0 * self.target * np.log(self.softmax_preds) - (1.0 - s..
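The excerpt above is cut off mid-expression. A minimal standalone sketch of the same computation is below, assuming the truncated second term is the standard binary-style cross-entropy continuation `-(1.0 - target) * np.log(1.0 - softmax_preds)`; the function and variable names here are illustrative, not taken from the original class.

```python
import numpy as np

def softmax(x, axis=1):
    # Subtract the per-row max before exponentiating for numerical stability
    shifted = x - np.max(x, axis=axis, keepdims=True)
    exp = np.exp(shifted)
    return exp / np.sum(exp, axis=axis, keepdims=True)

def softmax_cross_entropy_loss(prediction, target, eps=1e-9):
    # Apply softmax to each row (one row per observation)
    softmax_preds = softmax(prediction, axis=1)
    # Clip so np.log never receives exactly 0 or 1, keeping the loss stable
    softmax_preds = np.clip(softmax_preds, eps, 1 - eps)
    # Cross-entropy per element, summed over the whole batch
    loss = (-1.0 * target * np.log(softmax_preds)
            - (1.0 - target) * np.log(1.0 - softmax_preds))
    return np.sum(loss)
```

With a confident, correct prediction the loss is near zero; flipping the target labels makes it large, which is the behaviour the clipping is there to keep finite.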

Deep Learning 2022.11.24

[Deep Learning] Activation Function Implementation

Computation graph: a graph representation of a computation process. 1. Construct the computation graph. 2. Carry out the computation on the graph from left to right. - Computing from left to right: forward propagation - Propagating from right to left: backpropagation. Local computation - Local: the small scope directly related to a node itself - Local computation means a node can produce its result using only the information related to itself, regardless of what happens elsewhere in the graph. Chain rule: a property of the derivative of a composite function built from several functions - The derivative of a composite function can be expressed as the product of the derivatives of the functions that compose it. - Backpropagation through an addition node ( z = x + y ): during the backward pass, the derivative arriving from upstream is multiplied by 1 and passed downstream. In other words, the backward pass of an addition node ..
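The addition-node rule described above can be sketched as a small class; the `AddNode` name and forward/backward interface are illustrative assumptions, not the post's actual implementation.

```python
class AddNode:
    """Addition node z = x + y in a computation graph."""

    def forward(self, x, y):
        # Forward pass: compute the local result left to right
        return x + y

    def backward(self, dout):
        # Local derivatives are dz/dx = dz/dy = 1, so by the chain rule
        # the upstream gradient dout flows through unchanged to both inputs
        dx = dout * 1
        dy = dout * 1
        return dx, dy
```

Because the local derivative of addition is 1 with respect to each input, the node simply fans the upstream gradient out to both branches, which is exactly the "multiply by 1 and pass downstream" behaviour the excerpt describes.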

Deep Learning 2022.11.07