Wednesday, July 16, 2014

[Gossiping] Fw: [News] JVC reply clarifies: Chiang Wei-ning was an innocent victim — by bmka (偶素米蟲)




Plagiarism, or reckless gift authorship.
Either way, it violates academic ethics.
Minister Chiang can pick whichever charge he prefers.


※ [Forwarded from the AfterPhD board #1JnQjBQ1 ]

Author: bmka (偶素米蟲) Board: AfterPhD
Title: Re: [News] JVC reply clarifies: Chiang Wei-ning was an innocent victim
Time: Wed Jul 16 06:29:29 2014



I hope the Ministry of Science and Technology prints out these two papers by former Minister Chiang and compares them side by side.

Article A:
Chen, Chen-Wu, Po-Chen Chen, and Wei-Ling Chiang.
"Modified intelligent genetic algorithm-based
adaptive neural network control for uncertain structural systems."
Journal of Vibration and Control 19.9 (2013): 1333-1347.

Article B:
Chen, C. W., P. C. Chen, and W. L. Chiang.
"Stabilization of adaptive neural network controllers for nonlinear
structural systems using a singular perturbation approach."
Journal of Vibration and Control 17.8 (2011): 1241-1252.

This is clearly *at least* self-plagiarism (which is itself plagiarism in breach of academic ethics).
Former Minister Chiang should stop insisting he didn't plagiarize.
His face is going to be very swollen from the slap.


Since the equations are hard to display here, I excerpt only a few (consecutive) paragraphs from the Introduction of each paper for comparison:

Article A:
...Many NN systems, which are essentially intelligent inference systems
implemented in the framework of adaptive networks, have been
developed to model or control nonlinear plants with remarkable results.
The desired performance can be obtained with fewer adjustable
parameters, although sometimes more training is required to achieve
the higher accuracy derived from the transfer function and the learning
algorithm. In addition to these features, NNs also act as a universal
approximator (Hartman et al., 1990; Funahashi and Nakamura, 1993)
where the feedforward network is very important. A backpropagation
algorithm (Hecht-Nielsen, 1989; Ku and Lee, 1995), is usually used in
the feedforward type of NN but heavy and complicated learning is
needed to tune each network weight. Aside from the backpropagation
type of NN, another common feedforward NN is the radial basis function
network (RBFN) (Powell, 1987, 1992; Park and Sandberg, 1991).


Article B:
...Many NN systems, which are essentially intelligent inference systems
implemented in the framework of adaptive networks, have been
developed to model or control nonlinear plants, with remarkable results.
The desired performance can be obtained with fewer adjustable
parameters, although sometimes more training derived from the
transfer function and the learning algorithm is needed to achieve
sufficient accuracy. In addition, NN also acts as a universal approximator
so the feedforward network is very important (Hartman et al., 1990;
Funahashi and Nakamura, 1993). A backpropagation algorithm is usually
used in the feedforward type of NN, but this necessitates heavy and
complicated learning to tune each network weight (Hecht-Nielsen, 1989;
Ku and Lee, 1995). Besides the backpropagation type of NN, another
common feedforward NN is the radial basis function network (RBFN)
(Powell, 1987, 1992; Park and Sandberg, 1991).



Article A:
RBFNs use only one hidden layer. The transfer function of the hidden
layer is a nonlinear semi-affine function. Obviously, the learning rate
of the RBFN will be faster than that of the backpropagation network.
Furthermore, the RBFN can approximate any nonlinear continuous
function and eliminate local minimum problems (Powell, 1987, 1992;
Park and Sandberg, 1991). These features mean that the RBFN is
usually used for real-time control in nonlinear dynamic systems.
Some results indicate that, under certain mild function conditions,
the RBFN is capable of universal approximations (Park and Sandberg,
1991; Powell, 1992).


Article B:
The RBFN requires the use of only one hidden layer, and the transfer
function for the hidden layer is a nonlinear semi-affine function.
Obviously, the learning rate will be faster than that of the backpropagation
network. Furthermore, one can approximate any nonlinear continuous
function and eliminate local minimum problems with this method
(Powell, 1987, 1992; Park and Sandberg, 1991). Because of these features,
this technique is usually used for real-time control in nonlinear dynamic
systems. Some results indicate that, under certain mild function conditions,
the RBFN is even capable of universal approximations (Park and Sandberg,
1991; Powell, 1992).


Article A:
Adaptive algorithms can be utilized to find the best high-performance
parameters for the NN (Goodwin and Sin, 1984; Sanner and Slotine, 1992).
Adaptive laws have been designed for the Lyapunov synthesis approach
to tune the adjustable parameters of the RBFN, and analyze the stability
of the overall system. A genetic algorithm (GA) (Goldberg, 1989; Chen,
1998), is the usual optimization technique used in the self-learning or
training strategy to decide the initial values of the parameter vector.
This GA-based modified adaptive neural network controller (MANNC)
should improve the immediate response, the stability, and the robustness
of the control system


Article B:
Adaptive algorithms can be utilized to find the best high-performance
parameters for the NN. The adaptive laws of the Lyapunov synthesis
approach are designed to tune the adjustable parameters of the RBFN,
and analyze the stability of the overall system. A genetic algorithm (GA)
is the usual optimization technique used in the self-learning or training
strategy to decide the initial values included in the parameter vector
(Goldberg, 1989; Chen, 1998). The use of a GA-based adaptive neural
network controller (ANNC) should improve the immediate response,
stability, and robustness of the control system.


Article A:
Another common problem encountered when switching the control
input of the sliding model system is the so-called "chattering" phenomenon.
The smoothing of control discontinuity inside a thin boundary layer
essentially acts as a low-pass filter structure for the local dynamics, thus
eliminating chattering (Utkin, 1978; Khalil, 1996). The laws are updated
by the introduction of a boundary-layer function to cover parameter errors
and modeling errors, and to guarantee that the state errors converge
within a specified error bound.


Article B:
Another common problem encountered when switching the control
input of the sliding model system is the so-called “chattering” phenomenon.
Sometimes the smoothing of control discontinuity inside a thin boundary layer
essentially acts as a low-pass filter structure for the local dynamics, thus
eliminating chattering (Utkin, 1978; Khalil, 1996). The laws for this process
are updated by the introduction of a boundary-layer function to cover
parameter errors and modeling errors. This also guarantees that the
state errors converge within a specified error bound.



If this isn't plagiarism, what is?
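The side-by-side reading can also be made quantitative. As a rough sketch of my own (not something from the original post), Python's standard `difflib` can score how similar the opening sentences quoted from Articles A and B are; the two strings below are copied from the excerpts above and differ by a single comma:

```python
# Quantify the overlap between the two excerpts with the standard library.
# Independent texts usually score well below 0.5; near-verbatim reuse
# scores close to 1.0.
from difflib import SequenceMatcher

article_a = ("Many NN systems, which are essentially intelligent inference "
             "systems implemented in the framework of adaptive networks, "
             "have been developed to model or control nonlinear plants "
             "with remarkable results.")
article_b = ("Many NN systems, which are essentially intelligent inference "
             "systems implemented in the framework of adaptive networks, "
             "have been developed to model or control nonlinear plants, "
             "with remarkable results.")

# ratio() returns a similarity score in [0, 1].
ratio = SequenceMatcher(None, article_a, article_b).ratio()
print(f"similarity: {ratio:.3f}")
```

Running the same comparison over the full Introduction sections would make the overlap even more obvious; commercial plagiarism-detection services automate exactly this kind of comparison at document scale.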

Further reading: The ethics of self-plagiarism
http://cdn2.hubspot.net/hub/92785/file-5414624-pdf/media/ith-selfplagiarism-whitepaper.pdf

Self-Plagiarism is defined as a type of plagiarism in which
the writer republishes a work in its entirety or reuses portions
of a previously written text while authoring a new work.


--
※ Article URL: http://www.ptt.cc/bbs/AfterPhD/M.1405463371.A.681.html
jhyen: Never mind the rest; just digging up the 60 papers JVC retracted is spectacular enough....... 07/16 06:39
bmka: The second paper isn't even among the 60 that got caught! Looks like there are plenty more unexploded bombs 07/16 07:23
※ Edited: bmka (68.49.100.176), 07/16/2014 07:59:12
MyDice: The Ministry of Science and Technology won't investigate these; the only option is to report them to JVC 07/16 08:10
wacomnow: Props for the effort! Reporters, hurry over and copy this 07/16 08:19
WTFCAS: The keyboard made another input error… 07/16 08:57
flashegg: The second paper (the earlier 2011 one) isn't among the 60 caught here 07/16 10:42
flashegg: which suggests it may actually have passed review by real scholars? 07/16 10:42
flashegg: And then the 2013 one, being self-plagiarism, couldn't risk a real review? 07/16 10:43
flashegg: hence the fake accounts to get JVC to accept it; just my personal take 07/16 10:44
bmka: You'd have to ask Chiang Wei-ning.. his only two options are plagiarism or never having read the paper at all 07/16 10:49
flashegg: Also, CW Chen could argue the 2013 paper is a sequel to the 2011 one 07/16 10:50
bmka: My guess is there are other papers written from this same template 07/16 10:50
flashegg: and that the advisor was listed because Chiang Wei-ning supervised both 07/16 10:50
flashegg: the many duplicated passages could then be settled with an apology for cutting corners 07/16 10:51
bmka: Even a sequel may not self-plagiarize; that's common sense 07/16 10:51
flashegg: Anyway, this kind of self-plagiarism is not unheard of in STEM papers 07/16 10:51
flashegg: in the end the department/college review committee sends it back for re-review and it all fizzles out 07/16 10:52
bmka: Plagiarism is plagiarism; academia will render its own verdict :) 07/16 10:54
flashegg: Besides, if CW Chen steps up and says he listed his advisor without Chiang's consent 07/16 10:55
flashegg: purely because he had been supervised by him, out of respect, and so on 07/16 10:55
flashegg: wouldn't Chiang get off safely? This is also just my take~ 07/16 10:55
bmka: One unannounced gift authorship would be one thing, but a whole pile over the years, supposedly without his knowledge 07/16 10:56
bmka: all listed proudly on his CV... that's very hard to explain away 07/16 10:57
bmka: My actual guess is that Chiang Wei-ning never read these papers (tributes); he just doesn't dare 07/16 10:58
bmka: admit they aren't his research and that he took authorship in violation of academic ethics 07/16 10:58
bmka: But if you dare accept students' tributes, you must dare carry the blame; you can't dump it on the student when things blow up 07/16 10:59
flashegg: This is where it comes down to moral character versus human nature 07/16 11:03
flashegg: Suppose CW Chen really did add his advisor's name without Chiang knowing 07/16 11:03
flashegg: and only told him about the authorship after the paper was accepted 07/16 11:04
flashegg: how many advisors would say, no, remove my name immediately? 07/16 11:04
flashegg: I'd guess most would accept gladly, and even think the student considerate 07/16 11:05
bmka: That would still be Chiang's fault; the proper response is a stern warning that the student must never do this 07/16 11:05
bmka: and that it must never happen again 07/16 11:05
flashegg: I'm not condoning Chiang's behavior, just saying this sort of thing is truly commonplace 07/16 11:07
bmka: Academia is a small world; you guard your own reputation, all the more so a big name like Chiang 07/16 11:07
flashegg: Academia's unspeakable secret; dig in and you'll probably pull up a whole string of them 07/16 11:07
bmka: I know it's commonplace, but if you dare do it, don't expect to dodge responsibility when it blows up; that's all 07/16 11:08
bmka: If Chiang hadn't kept dodging responsibility, I wouldn't have wasted time reading their junk papers (the more I read, the angrier I got) 07/16 11:19
bmka: Also, Chiang was far too indiscriminate; taking authorship even on papers in a third-rate journal like this 07/16 11:23
tainanuser: Upvoted, great effort! 07/16 11:42
MyDice: Can we see from the Ministry's or Chiang's webpage how many publications 07/16 12:05
MyDice: he has had since 2010? Especially how rampant the casual gift authorship 07/16 12:07
MyDice: was while he served as university president and minister 07/16 12:07
ceries: Impressive! 07/16 14:53
jabari: Can we blame this one on the student movement? Or is it the eight-year legacy?? 07/16 16:27
jack5756: It's all the student movement's fault, and many of the papers are eight-year legacy toxins 07/16 17:09

※ Posted via: PTT (ptt.cc)
※ Forwarded by: bmka (68.49.100.176), 07/16/2014 18:50:53
※ Edited: bmka (68.49.100.176), 07/16/2014 18:57:19
MIT8818: How does this content show he's innocent? 07/16 18:57
soultakerna: They literally copy-pasted XD 07/16 18:57
bmka: He isn't innocent; Minister Chiang's paper is plagiarism, and self-plagiarism at that 07/16 18:58
bmka: there's no denying it 07/16 18:58
soultakerna: Looks like they did change a tiny bit lol 07/16 18:58

Changing a little bit is still plagiarism; please google the definition of plagiarism

soria: Oh, self-plagiarism, is it? 07/16 18:59
soultakerna: Is there a reference for these passages? I can't find the originals 07/16 19:03
soultakerna: I do know that changing a little still counts as copying 07/16 19:04
※ Edited: bmka (68.49.100.176), 07/16/2014 19:07:24
soria: Now I see why he rushed to distance himself on day one; these problems are bound to multiply the deeper you dig 07/16 19:08
bmka: Zombie peer review exists precisely to sneak obviously problematic papers like this through 07/16 19:09
walei98: Bumping this 07/16 19:15
offish: No time to read it closely, bumping first 07/16 19:19
soria: The devil is in the details 07/16 19:37
loki1789: Bump 07/16 20:16
honeyombd: The devil is in the details 07/16 20:19
honeyombd: Got into another fight with the 9.2 at home over this filth today = = 07/16 20:31
soria: I'd suggest the OP change the title to make it clearer. 07/16 20:32
woulin: How else would CW Chen churn out 20-odd papers a year, if not by operating like this.. 07/16 21:12
reil: Keep digging and he won't even keep his professorship 07/16 23:09





