PyCharm PyTorch install error: a series of problems. The torch package could not be found because the pip version was too old; after upgrading pip from 19.3 to 20.2.4, the newer version still failed to install torch.



 

DEPRECATION: The -b/--build/--build-dir/--build-directory option is deprecated. pip 20.3 will remove support for this functionality. A possible replacement is use the TMPDIR/TEMP/TMP environment variable, possibly combined with --no-clean. You can find discussion regarding this at https://github.com/pypa/pip/issues/8333.

The error above is discussed in this reference:

https://www.jb51.net/article/194349.htm
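The deprecation notice says to replace the removed -b/--build option with the TMPDIR/TEMP/TMP environment variables. A minimal sketch of that replacement, driving pip through subprocess; the command here only queries pip's version as a stand-in, and the temporary directory name is incidental:

```python
import os
import subprocess
import sys
import tempfile

# pip 20.3 removes -b/--build; the deprecation notice recommends steering
# the build location through TMPDIR/TEMP/TMP, optionally with --no-clean.
build_dir = tempfile.mkdtemp(prefix="pip-build-")
env = dict(os.environ, TMPDIR=build_dir, TEMP=build_dir, TMP=build_dir)

# Stand-in command: for a real install you would run e.g.
# [sys.executable, "-m", "pip", "install", "torch==1.6.0"] with the same env.
result = subprocess.run([sys.executable, "-m", "pip", "--version"],
                        env=env, capture_output=True, text=True)
print(result.stdout.strip())
```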

A solution from someone who hit a similar problem is described here:

https://blog.csdn.net/weixin_/article/details/


 

 

 

 

ERROR: Could not find a version that satisfies the requirement torch==1.6.0 (from versions: 0.1.2, 0.1.2.post1, 0.1.2.post2)

Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple
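The "from versions: 0.1.2, …" message usually means no binary wheel on the index matches the current interpreter: torch 1.x wheels are published only for 64-bit CPython on a limited range of Python versions, so a 32-bit or unsupported interpreter only "sees" the ancient 0.1.2 placeholder releases, regardless of the pip version. A quick sketch to check what pip is matching against:

```python
import platform
import struct
import sys

# torch 1.x wheels exist only for 64-bit interpreters; a 32-bit Python
# (pointer size 32) cannot match any of them, whatever the pip version.
print("python version:", platform.python_version())
print("pointer size  :", struct.calcsize("P") * 8, "bit")
print("interpreter   :", sys.executable)
```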


 

3. Final solution

The PyCharm + PyTorch import problem was finally solved with the approach below, after quite a few detours.

The key reference is the link below: switch PyCharm's project interpreter to the python.exe of the Anaconda environment.

https://www.jb51.net/article/181954.htm

The same post also shows how to change the interpreter inside PyCharm, via the project settings.

This approach was chosen because the same code could already import PyTorch (version 1.1.0) in the locally launched Anaconda Jupyter Notebook, yet every install attempt inside PyCharm failed, whether for version 1.1.0, 0.4.1, or 1.6.0. That pointed to an interpreter/environment mismatch, which suggested reusing the Anaconda environment (mainly its interpreter). After switching the project interpreter as described in the post above, the torch package referenced in the code was found and the "package not found" error disappeared.
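After switching the project interpreter, a quick sanity check (run inside PyCharm) confirms which python.exe is active and whether torch is visible to it; the 1.1.0 version mentioned in the comment matches the setup described above and is otherwise incidental:

```python
import sys

# If the switch took effect, this path should point into the Anaconda
# installation rather than the previous PyCharm virtualenv.
print("active interpreter:", sys.executable)

try:
    import torch  # 1.1.0 in the Anaconda environment described above
    print("torch version:", torch.__version__)
except ImportError:
    print("torch is still not importable from this interpreter")
```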

 


 

The test code is an NLP Seq2Seq example:

# code by Tae Hwan Jung(Jeff Jung) @graykode, modify by wmathor
import torch
import numpy as np
import torch.nn as nn
import torch.utils.data as Data

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# S: Symbol that shows starting of decoding input
# E: Symbol that shows starting of decoding output
# ?: Symbol that will fill in blank sequence if current batch data size is shorter than n_step
letter = [c for c in 'SE?abcdefghijklmnopqrstuvwxyz']
letter2idx = {n: i for i, n in enumerate(letter)}

seq_data = [['man', 'women'], ['black', 'white'], ['king', 'queen'],
            ['girl', 'boy'], ['up', 'down'], ['high', 'low']]

# Seq2Seq Parameters
n_step = max([max(len(i), len(j)) for i, j in seq_data])  # max_len(=5)
n_hidden = 128
n_class = len(letter2idx)  # classification problem
batch_size = 3

def make_data(seq_data):
    enc_input_all, dec_input_all, dec_output_all = [], [], []
    for seq in seq_data:
        for i in range(2):
            seq[i] = seq[i] + '?' * (n_step - len(seq[i]))  # 'man??', 'women'
        enc_input = [letter2idx[n] for n in (seq[0] + 'E')]   # ['m', 'a', 'n', '?', '?', 'E']
        dec_input = [letter2idx[n] for n in ('S' + seq[1])]   # ['S', 'w', 'o', 'm', 'e', 'n']
        dec_output = [letter2idx[n] for n in (seq[1] + 'E')]  # ['w', 'o', 'm', 'e', 'n', 'E']
        enc_input_all.append(np.eye(n_class)[enc_input])
        dec_input_all.append(np.eye(n_class)[dec_input])
        dec_output_all.append(dec_output)  # not one-hot
    # make tensors
    return torch.Tensor(enc_input_all), torch.Tensor(dec_input_all), torch.LongTensor(dec_output_all)

'''
enc_input_all: [6, n_step+1 (because of 'E'), n_class]
dec_input_all: [6, n_step+1 (because of 'S'), n_class]
dec_output_all: [6, n_step+1 (because of 'E')]
'''
enc_input_all, dec_input_all, dec_output_all = make_data(seq_data)

class TranslateDataSet(Data.Dataset):
    def __init__(self, enc_input_all, dec_input_all, dec_output_all):
        self.enc_input_all = enc_input_all
        self.dec_input_all = dec_input_all
        self.dec_output_all = dec_output_all

    def __len__(self):  # return dataset size
        return len(self.enc_input_all)

    def __getitem__(self, idx):
        return self.enc_input_all[idx], self.dec_input_all[idx], self.dec_output_all[idx]

loader = Data.DataLoader(TranslateDataSet(enc_input_all, dec_input_all, dec_output_all), batch_size, True)

# Model
class Seq2Seq(nn.Module):
    def __init__(self):
        super(Seq2Seq, self).__init__()
        self.encoder = nn.RNN(input_size=n_class, hidden_size=n_hidden, dropout=0.5)  # encoder
        self.decoder = nn.RNN(input_size=n_class, hidden_size=n_hidden, dropout=0.5)  # decoder
        self.fc = nn.Linear(n_hidden, n_class)

    def forward(self, enc_input, enc_hidden, dec_input):
        # enc_input(=input_batch): [batch_size, n_step+1, n_class]
        # dec_input(=output_batch): [batch_size, n_step+1, n_class]
        enc_input = enc_input.transpose(0, 1)  # enc_input: [n_step+1, batch_size, n_class]
        dec_input = dec_input.transpose(0, 1)  # dec_input: [n_step+1, batch_size, n_class]
        # h_t : [num_layers(=1) * num_directions(=1), batch_size, n_hidden]
        _, h_t = self.encoder(enc_input, enc_hidden)
        # outputs : [n_step+1, batch_size, num_directions(=1) * n_hidden(=128)]
        outputs, _ = self.decoder(dec_input, h_t)
        model = self.fc(outputs)  # model : [n_step+1, batch_size, n_class]
        return model

model = Seq2Seq().to(device)
criterion = nn.CrossEntropyLoss().to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)

# Training
for epoch in range(5000):
    for enc_input_batch, dec_input_batch, dec_output_batch in loader:
        # make hidden of shape [num_layers * num_directions, batch_size, n_hidden]
        h_0 = torch.zeros(1, batch_size, n_hidden).to(device)
        (enc_input_batch, dec_input_batch, dec_output_batch) = (
            enc_input_batch.to(device), dec_input_batch.to(device), dec_output_batch.to(device))
        # enc_input_batch : [batch_size, n_step+1, n_class]
        # dec_input_batch : [batch_size, n_step+1, n_class]
        # dec_output_batch : [batch_size, n_step+1], not one-hot
        pred = model(enc_input_batch, h_0, dec_input_batch)
        # pred : [n_step+1, batch_size, n_class]
        pred = pred.transpose(0, 1)  # [batch_size, n_step+1(=6), n_class]
        loss = 0
        for i in range(len(dec_output_batch)):
            # pred[i] : [n_step+1, n_class]
            # dec_output_batch[i] : [n_step+1]
            loss += criterion(pred[i], dec_output_batch[i])
        if (epoch + 1) % 1000 == 0:
            print('Epoch:', '%04d' % (epoch + 1), 'cost =', '{:.6f}'.format(loss))
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

# Test
def translate(word):
    enc_input, dec_input, _ = make_data([[word, '?' * n_step]])
    enc_input, dec_input = enc_input.to(device), dec_input.to(device)
    # make hidden of shape [num_layers * num_directions, batch_size, n_hidden]
    print("enc_input=", enc_input, "dec_input=", dec_input, "n_hidden=", n_hidden)
    hidden = torch.zeros(1, 1, n_hidden).to(device)
    output = model(enc_input, hidden, dec_input)
    # output : [n_step+1, batch_size, n_class]
    print("output.data=", output.data)
    predict = output.data.max(2, keepdim=True)[1]  # select n_class dimension
    print("predict=", predict)
    decoded = [letter[i] for i in predict]
    translated = ''.join(decoded[:decoded.index('E')])
    return translated.replace('?', '')

print('test')
print('man ->', translate('man'))
print('mans ->', translate('mans'))
print('king ->', translate('king'))
print('black ->', translate('black'))
print('up ->', translate('up'))

Execution output (excerpt):

(printed `predict` tensor excerpt: a column of per-step class indices, e.g. [[17]], [[15]], [[ 7]], [[16]], ..., [[ 2]]; the full output is truncated)

 

The code comes from:

https://github.com/wmathor/nlp-tutorial/blob/master/4-1.Seq2Seq/Seq2Seq_Torch.ipynb

 

 

 


Publisher: 全栈程序员-站长. Please credit the source when republishing: https://javaforall.net/173034.html (original link: https://javaforall.net)
